
Disclaimer: I develop uselessd, probably have a warped mindset from being a Luddite who values transparency, and evil stuff like that.

The author of this piece makes the classic mistake of equating the init system as the process manager and process supervisor. These are, in fact, all separate stages. The init system runs as PID 1 and strictly speaking, the sole responsibility is to daemonize, reap its children, set the session and process group IDs, and optionally exec the process manager. The process manager then defines a basic framework for stopping, starting, restarting and checking status for services, at a minimum. The process supervisor then applies resource limits (or even has those as separate tools, like perp does with its runtools), process monitoring (whether through ptrace(2), cgroups, PID files, jails or whatnot), autorestart, inotify(7)/kqueue handlers, system load diagnostics and so forth. The shutdown stage is another separate part, often handled either in the initd or the process manager. Often, it just hooks to the argv[0] of standard tools like halt, reboot, poweroff, shutdown to execute killall routines, detach mount points, etc.
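The irreducible PID 1 duty mentioned above, reaping re-parented children, fits in a few lines. A toy sketch (Python purely for illustration; a real init would be far more careful):

```python
import os, time

def reap_children():
    """Collect any exited children so they don't linger as zombies.
    This is the one job a minimal PID 1 cannot delegate: the kernel
    re-parents orphans to it, and someone must wait() on them."""
    reaped = []
    while True:
        try:
            pid, _status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:
            break            # no children at all
        if pid == 0:
            break            # children exist, but none have exited yet
        reaped.append(pid)
    return reaped

# Demo: spawn two short-lived children, give them time to exit, reap.
kids = []
for _ in range(2):
    pid = os.fork()
    if pid == 0:
        os._exit(0)          # child exits immediately
    kids.append(pid)

time.sleep(0.2)
got = reap_children()
print(sorted(got) == sorted(kids))
```

Everything beyond this loop (service state, limits, monitoring) is exactly the stuff the stages above can delegate.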

To stuff everything in the init system, I'd argue, is bad design. One must delegate, whether to auxiliary daemons, shell scripts, configuration syntax (in turn read and processed by daemons) or what have you.

sysvinit is certainly inadequate. The inittab is cryptic and clunky, and runlevels are a needlessly restrictive concept to express what is essentially a named service group that can be isolated/overlayed.

Of course, to start services on socket connections, you either use (x)inetd, or you reimplement a subset or (partial or otherwise) superset of it. There's no way around this, it's choosing to handle more on your own rather than delegate. In systemd's case, they do this to support socket families like AF_NETLINK.
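The (x)inetd contract being reimplemented here is simple: the superserver owns the listening socket and only spawns the service when a connection arrives, handing it the connection as stdin/stdout. A minimal sketch (Python for illustration, AF_INET only; systemd's fd-passing protocol differs in its details):

```python
import os, socket

# The "superserver" creates the listening socket up front.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))              # ephemeral port
listener.listen(1)
port = listener.getsockname()[1]

client = os.fork()
if client == 0:                              # a test client
    listener.close()
    c = socket.create_connection(("127.0.0.1", port))
    line = c.makefile().readline()
    c.close()
    os._exit(0 if line.strip() == "hello" else 1)

conn, _ = listener.accept()                  # a connection arrived:
svc = os.fork()                              # only now spawn the service
if svc == 0:
    os.dup2(conn.fileno(), 0)                # the classic inetd contract:
    os.dup2(conn.fileno(), 1)                # the socket is stdin/stdout
    conn.close()
    listener.close()
    os.write(1, b"hello\n")                  # a trivial "service"
    os._exit(0)

conn.close()
os.waitpid(svc, 0)
_, cstat = os.waitpid(client, 0)
client_ok = os.WIFEXITED(cstat) and os.WEXITSTATUS(cstat) == 0
print(client_ok)
```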

As for systemd being documented, I'd say it's quite mediocre. The manpages proved to be inconsistent and incomplete, and for anyone but an end user or a minimally invested sysadmin, of little use whatsoever. Quantity is nice, but the quality department is lacking.

sysvinit's baroque and arduous shell scripts are not the fault of using shell scripts as a service medium, but have to do with sysvinit's aforementioned cruft (inittab and runlevels) and the historical lack of any standard modules. BSD init has the latter in the form of /etc/rc.subr, which implements essential functions like rc_cmd and wait_for_pids. Exact functions vary from BSD to BSD, but more often than not, BSD init services are even shorter than systemd services: averaging 3-4 lines of code.
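The reason those services stay so short is that the shared module absorbs the boilerplate. A toy sketch of the idea (Python for illustration; the real thing is shell, and the dispatcher name mirrors NetBSD/FreeBSD's run_rc_command, while "toyd" and its path are invented):

```python
# Toy rc.subr-style framework: implement the start/stop/status
# machinery once, shared by every service.
def run_rc_command(cmd, name, command, handlers=None):
    """Generic dispatcher; services may override individual commands."""
    handlers = handlers or {}
    if cmd in handlers:                      # per-service override hook
        return handlers[cmd]()
    if cmd == "start":
        return f"starting {name}: {command}"
    if cmd == "stop":
        return f"stopping {name}"
    if cmd == "status":
        return f"{name} is (pretend) running"
    raise ValueError(f"usage: {name} (start|stop|status)")

# An entire "service file" then shrinks to a few declarations,
# which is why BSD rc.d scripts can stay so short:
name = "toyd"                                # hypothetical service
command = "/usr/local/sbin/toyd"             # hypothetical daemon path
out = run_rc_command("start", name, command)
print(out)
```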

A unified logging sink is nothing novel, it's just that systemd is the first of its kind that gained momentum, but with its own unique set of issues. syslogd and kmsg were still passable, and the former also seamlessly integrated itself with databases.

Once again, changing the execution environment is a separate stage and has multiple ways of being done. Init-agnostic tools that wrap around syscalls are probably my favorite, but YMMV.

As for containers, it's about time Linux caught up to Solaris and FreeBSD.



> The init system runs as PID 1 and strictly speaking, the sole responsibility is to daemonize, reap its children, set the session and process group IDs, and optionally exec the process manager.

The process manager gets killed. How do you recover?

If you have respawn logic for it in PID 1, how do you log information about a failure to respawn the process manager?

Perhaps you build in some basic logic for logging. Where do you store the data? What if the user level syslog the user wants you to feed data to can't be brought up yet, because it depends on a file system that is not yet mounted?

There may very well be alternatives to the systemd design, but I've yet to see any that are remotely convincing, in that most of them fail to recognise substantial aspects of why systemd was designed the way it is, and just tear out stuff without proper consideration of the implications.

Most proposed alternative stacks to systemd fall down on the very first question above.

I agree with you that it doesn't seem like a great idea to stuff everything in the init system, but I don't agree that "one must delegate" unless the delegation reduces complexity, and I've not seen any convincing demonstrations that it does.

I'd love it if someone came up with something that provided the capabilities and guarantees that systemd does with independent, less coupled components, though.

But there's no way I'm going back to life without the capabilities systemd is providing.


Wait what? What happens if the process manager crashes if you're running non-systemd: you might respawn it but not be able to log the fact that you did so. Worst case, you fail to respawn it and your system crashes.

What happens if the process manager crashes if you're running systemd: the process manager is in PID1 (or, equivalently, in a tightly coupled process that PID1 depends on - because the whole point of your post was that you can never get to a state where PID1 is working but logging isn't working), so your system crashes, every time. How is that better? And if that's really what you want, it's easy to configure a decoupled init system to do that.

Hey, some people like their logs to be sent as email. Maybe we should move sendmail into PID1 as well.


I think the ideas presented in the 's6' init system address most of these issues, I don't know why none of the distributions picked it up as an alternative: http://skarnet.org/software/s6/why.html http://skarnet.org/software/s6/s6-svscan-1.html


It doesn't address the logging issue, as far as I can tell. It appears to rely on the same logging solution as the original daemontools. I used daemontools extensively for a while, and it was great, and I like Bernstein's design philosophy, which appears to have been largely carried forward into s6, but it was simplistic, and suffers from a number of the same problems as a "raw" SysV-init, such as putting us back at the mercy of badly written start/stop scripts, and no dependency management.

If someone could come up with a systemd replacement which manages to keep the systemd features while using a design philosophy more in line with that of Daemontools, that would be fantastic, but it'd end up looking very different to s6. Some stuff could certainly be cleanly layered on top (such as using a wrapper to avoid the start/stop problem using the same method of cgroup containment as systemd). Other things, such as explicit or implicit (via socket activation etc.) dependency management, I'm not so sure how you'd fit into that model easily.

I'd love it if someone tried, though. It would certainly make it easier to experiment with replacing specific subsets of functionality.


People actually _want_ the logging behavior of systemd? My impression is that it's the most widely hated part; I've heard endless stories of journald thrashing the filesystem forever, losing logs completely on corruption, etc. And even operating properly, its performance is comparable to grepping a flat text log, since despite having a "more efficient" format, it increased the actual data size by something like 4-10x.

Logs are essentially write-once, write-often, read-rarely data. As such, the optimal format is always going to be a flat, append-only file.
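The append-only property is also what makes concurrent writers safe without coordination; a small sketch of the O_APPEND pattern (Python for illustration):

```python
import os, tempfile

# Open a log with O_APPEND: each write() lands at the current tail
# (atomically for reasonably small writes on local filesystems), so
# crash recovery is just "read until the last complete line" -- there
# is no index to corrupt.
tmp = tempfile.NamedTemporaryFile(delete=False)
path = tmp.name
tmp.close()

fd = os.open(path, os.O_WRONLY | os.O_APPEND)
for i in range(3):
    os.write(fd, f"event {i}\n".encode())
os.close(fd)

with open(path) as f:
    lines = f.read().splitlines()
os.unlink(path)
print(lines)
```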


Also, coredumps don't really belong in the journal; I'd turn that off.


> If someone could come up with a systemd replacement which manages to keep the systemd features while using a design philosophy more in line with that of Daemontools, that would be ...

... in the indicative rather than in the subjunctive, and in fact already mentioned here once. http://homepage.ntlworld.com./jonathan.deboynepollard/Softwa...

> The process manager gets killed. How do you recover?

In nosh terminology, this is the service manager. If it gets killed, the thing that spawned it starts another copy. This could be systemd, if one were running the service manager under systemd. It could be the nosh system manager. Of course, recovery is imperfect. If one designs a system like the nosh package, one makes an engineering tradeoff in the design; the same as one does when one designs a package like systemd. The system manager and the service manager are separate, but the underlying operating system kernel will re-parent orphaned service daemon processes if the service manager dies. One trades the risk of that for the greater separation of the twain, and greater simplicity of the twain. The program that one runs as process #1 is a lot simpler, being concerned only with system state, but there's no recovery in a very rare failure mode. Indeed, the simplicity makes that rarity even greater, if anything. systemd makes the tradeoff differently: there's recovery in a very rare failure mode (which I've yet to see occur in either system outwith me, with superuser privileges, sending signals by hand) at the expense of all of the logic for tracking service states, and for trying to recover them (in circumstances where one knows that the process has failed somehow and might possess corrupted service tracking data), all in that one program that runs as process #1.
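The kernel behaviour this tradeoff relies on, re-parenting of orphaned daemon processes, is easy to observe directly. A sketch (Python for illustration; on Linux the new parent is PID 1 or the nearest subreaper):

```python
import os, time

# A middle process forks a grandchild and dies at once; the kernel
# re-parents the orphaned grandchild, which reports its new parent
# back to us over a pipe.
r, w = os.pipe()
mid = os.fork()
if mid == 0:
    if os.fork() == 0:                       # grandchild
        os.close(r)
        time.sleep(0.3)                      # outlive the middle process
        os.write(w, str(os.getppid()).encode())
        os._exit(0)
    os._exit(0)                              # middle process dies here

os.waitpid(mid, 0)                           # middle process is gone
os.close(w)
new_parent = int(os.read(r, 32))
print(new_parent != mid)                     # the orphan got a new parent
```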

> If you have respawn logic for it in PID 1, how do you log information about a failure to respawn the process manager?

In the log that is there for the system manager. See the manual page for system-manager, which explains the details of the (comparatively) small log directory and the (one) logging daemon that is directly controlled by the system-manager, both intended to be dedicated to logging only the stuff that is directly from the system manager and service manager.

> Perhaps you build in some basic logic for logging. Where do you store the data?

In a tmpfs, just like systemd-journald does in the same situation. /run/system-manager/log/ in this particular case. Strictly speaking, this "basic logging" isn't built-in. In theory, it is replaceable with whatever logging program one likes, as the system-manager just spawns a child process running cyclog and that name could be fairly simply made configurable. In practice, difficulties with the C++ runtime library on BSDs being placed on the /usr volume rather than the / volume, and indeed the cyclog program itself living on the /usr volume when it has to be under /usr/local/, have made it necessary to couple more tightly than wanted here, so far. But those problems could go away in the future; if the BSD people were persuaded to put the C++ runtime library in the same place as the C runtime library, for example.

> Most proposed alternative stacks to systemd fall down on the very first question above.

In many ways, that's because it's a poor question that focusses on a very rare circumstance. As I said, I've yet to see either system exhibit this failure mode in real-world use absent my deliberately triggering it. (Nor indeed have I ever seen it occur with upstart or launchd.) Much better questions are ones like "Where are inter-service dependencies and start/stop orderings recorded?", "Is there an XML parser in the program for process #1?", "What makes up a service bundle?", "How do system startup and shutdown operate?", "How does the system cope with service bundles that are on the /var volume when /var hasn't been mounted yet?", "How does the system handle service bundles in /etc when the / volume is read-only?", and "What does the system manager do?". Those are all answered in the package's manual pages and Guide, of course.


Isn't that a pretty narrow corner case? I can count the number of times the process manager has been killed on one hand.


Add enough machines, and "narrow corner cases" happen all the time and at all the wrong moments.

The bigger point is that there are lots of these "narrow corner cases" all over a typical SysV-init setup, not least due to tons of badly written init scripts. The number of times I've seen services fail to start because of one is beyond counting.

To produce a systemd alternative, creating something that competes favorably with SysV-init is insufficient. Today you also need to demonstrate how you deal with those corner cases, or why they don't matter - many of us have no intention of going back to the bad old days.


Also, you depend every day on another process that is special in some sense, just like the process manager: Xorg. If Xorg dies, all your desktop applications die. By your line of reasoning, Xorg should be moved into PID 1 too, which is definitely not a good idea.

I'm not saying Xorg has never crashed; it did, rarely, when running RC code or proprietary drivers. In fact I've probably had as many Xorg crashes as kernel panics, which says something about how stable Xorg is. Still, I wouldn't want to run it as PID 1, where a crash would really bring down everything.


That is a pretty bizarre argument. I would conclude from init and Xorg rarely crashing that it is possible to write a reasonably stable daemon, and that perhaps it's not a good trade-off to introduce a lot of complexity into those daemons to be able to recover from crashes.


I don't understand how you come to the conclusion that putting Xorg in pid 1 would be even a remotely fitting comparison.

For starters, as an example, I have 100 times as many servers as I have desktops to deal with - for a lot of us Xorg is not an important factor. But the process manager is vital to all of them - server and desktop alike - if you want to keep them running. If the process manager fails, it doesn't matter that it wasn't Xorg that took things down.

Secondly, that X clients fail if the server fails is not a good argument for moving Xorg into pid 1 too, because it would not solve anything. If pid 1 crashes, you're out of luck - the best case fallback is to try to trigger a reboot.

Having (at least minimal) process management in pid 1 on the other hand serves the specific purpose of always retaining the ability to respawn required services - including X if needed. (Note that it is certainly not necessary to have as complicated respawn capabilities in pid 1 as Systemd does).
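A respawn capability of that minimal sort is only a few lines. A toy sketch (Python for illustration; a real pid 1 would add backoff, logging, and signal handling):

```python
import os

def supervise(spawn, max_restarts=3):
    """Respawn the managed process when it dies abnormally, up to a cap.
    Returns the number of restarts once it exits cleanly."""
    restarts = 0
    while restarts <= max_restarts:
        pid = spawn()
        _, status = os.waitpid(pid, 0)
        if os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0:
            return restarts                  # clean exit: stop supervising
        restarts += 1
    raise RuntimeError("process kept crashing; giving up")

# Simulate a flaky "process manager" that crashes twice, then succeeds.
attempts = []
def spawn():
    pid = os.fork()
    if pid == 0:
        os._exit(1 if len(attempts) < 2 else 0)
    attempts.append(pid)
    return pid

restarts = supervise(spawn)
print(restarts)                              # respawned twice
```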

Having Xorg in pid 1 would not serve a comparable purpose at all: if it crashes, the process manager can respawn Xorg. If you then need to respawn X clients, and be able to recover from an Xorg crash, there are a number of ways to achieve that which can work fine as long as your process manager survives, including running the clients under a process manager, and have them interface with X via a solution like Xpra, or write an Xlib replacement to do state tracking in the client and allow for reconnects to the X server.

Desktop recoverability is also a lot less important for most people: every one of our desktops has a human in front of it when it needs to be usable. Most of them are also rebooted regularly in "controlled" ways. Most applications running on them get restarted regularly. People see my usage as a bit weird when I keep my terminals and browsers open for a month or two at a time.

On the other hand, our servers are in separate data centres and need to be available 24x7, and many have not been rebooted for years - and outside of Android and various embedded systems, this is where you find most Linux installs.

While we can remote reboot or power cycle most of them, with enough machines there is a substantial likelihood of complications if you reboot or, shudder, power cycle (last time we lost power to a rack, we lost 8 drives when it was restarted). Even with "just" reboots there is a substantial chance of problems that require manual intervention to get the server functional again (disk checks running into problems, human error the last time something was updated, etc.).

That makes it a big deal to increase the odds of the machines being resilient against becoming totally non-responsive.


I think you raised an interesting point here 'for a lot of us Xorg is not an important factor', I agree. The same could be said about some of the features that systemd provides that cause a lot of flames (binary logs). It has been said before that systemd is monolithic, and this is probably what makes switching so hard.

It is all-or-nothing, whereas if you could gradually replace the old sysvinit/policykit/consolekit/etc. stuff with systemd/logind then problems during that transition could be debugged more easily. You could also choose to not replace some components where the systemd/non-systemd replacement is broken.


> The author of this piece makes the classic mistake of equating the init system as the process manager and process supervisor.
I think it is a bit more subtle than that. The author makes the mistake of inferring an architecture from observed behavior, and fails to ascertain where the warts come from: the architecture or the implementation. They aren't the only one; it's a common problem. The result, though, is kind of like playing 'architecture telephone', where each person implements what they think is the architecture implied and ends up with something subtly different than intended. The result is a hodgepodge of features around various parts of the system.

In the interest of full disclosure I must admit I was on duty when AT&T and Sun were creating the unholy love child of System V and BSD, I'm sorry.

The architecture, as spelled out by AT&T system engineers, was that process 1 was a pseudo process which configured the necessary services and devices appropriate for an administrator-defined level of operation. Aka a 'run level.' I think they would have liked the systemd proposal, but they would no doubt take it completely out of the process space. I am sure they would have wanted it to be some sort of named stream into the inner consciousness of the kernel which could configure the events system so that the desired running configuration was made manifest. They always hated the BSD notion that init was just the first 'shell process' which happened to kick off various processes that made for a multi-user experience.

Originally users were just like init, in that you logged in and everything you did was a subprocess of your original login shell. It was a very elegant system, root's primal shell spawned getty, and getty would spawn a shell for a user when they logged in, everything from that point on would be owned by the user just like everything that happened before was owned by root. The user's login shell logged out and everything they had done got taken down and resources reclaimed. When the root shell (init) logged out all resources got reclaimed and the system halted.

But Linux, like SunOS before it, serves two masters. The server which has some pretty well defined semantics and the "desktop user" which has been influenced a whole bunch by microcomputer operating systems like Windows.

I wasn't the owner of the init requirements document (I think Livsey was); the important thing was that it was written in the context of a bigger systems picture, and frankly systemd doesn't have that same context. I think that is what comes across as confusion.


Getting tired of all the systemd hate. If you don't like it, don't use it. Instead of complaining and making useless-by-design wrappers and/or dumbed-down versions, why not focus your efforts on making a new, better init system and convincing people they should use it instead? systemd isn't final - it's software, and will come and go.

Not to mention, most of the systemd hate seems to be spread by only two main sources now, and both cite each other as sources (a little ironic).[1]

[1] http://www.jupiterbroadcasting.com/66417/systemd-haters-bust...

systemd was really designed with servers in mind, and really does bring a lot to the table for server admins.


The "new better init system" already exists. Several of them, in fact. The only difference? They have no intention of engaging in any shady realpolitik, or consolidating functionality unrelated to their core purpose.

Jupiter Broadcasting are an unreliable source, to say the least. I did watch that episode. When you use such pristine arguments as "Someone reimplemented systemd's D-Bus APIs, therefore systemd is portable!" (much like the Windows API is portable, because Wine exists) and claim that systemd is a "manufactured controversy" while responding to easy straw man arguments, there is a term for that kind of person: a shill.

I was also very amused by the Linux Action Show's coverage of uselessd. They spent the entire time whining about the name, thinking it makes fun of the systemd developers, when in fact it's making fun of ourselves. They also got mad over the use of the word "cruft" and later called us "butthurt BSD users".

Good to see that you bring some new insights, however. Very mature and enlightening.


Quite frankly, if this is the attitude one can expect from the uselessd developers... then I think this conversation is moot.

If a truly better init system already exists, then people who care strongly and/or have very specific use-cases where that init system excels will use it. Nobody is married to systemd.

One must also look at how many industry heavyweights are behind systemd now (even Canonical). I'm certain they have considered the pros and cons to systemd much more extensively than all of the armchair quarterbacks appearing in this thread. Perhaps you personally dislike systemd for what you think are good reasons, but know you are in the minority now (you weren't always).

Bottom line -- systemd is targeting servers, everything else is tertiary. Don't like it, then don't use it. But quit using every possible chance to spread needless hate. systemd is not an assault on you personally. No matter how loud you scream -- systemd is not going anywhere for the time being.


You're not even addressing any argument against systemd at all. You're just presenting a consolation:

"Hey, everybody, look at all the people using systemd! They must know better than you, so shut the fuck up and use whatever you want - no one is stopping you! By the way, systemd is meant for servers, even though the developers have never said anything like that and have made it clear that it's meant for all use cases."

In this regard, you are little more than a troll. Or a person who thinks popularity means quality. Both, even.


> systemd was really designed with servers in mind, and really does bring a lot to the table for server admins.

Which is totally ironic, in that the server admins hate it. (Speaking just for myself here =) )

I am a sysadmin of a medium sized data-center. I am in charge of 100-150 servers at any given point. None of the changes that systemd 'fixes' benefit me or my systems. Boot times? What's the point when it takes 10 minutes for the drive arrays to spin up? Logging? I pray a system never dies and I have to access those rotten binary log files from a live CD. Network changes/configuration? Nope, every server is configured with static network configs. Power management? Ha! That's funny. Downtime measured in minutes costs more than electricity does in a month.

I could go on. But there is one major caveat: As a laptop user, systemd is fantastic.

As my Debian servers need to and/or get updated and start requiring systemd then I will just migrate them to OpenBSD. This process has already begun.

Systemd is changing things for the wrong group of people. Mobile/desktop users have a lot of wiggle room and areas that need improvement. Server admins need stability: in software, hardware, (script) syntaxes, and interfaces. Desktop users need everything that systemd offers.

I will concede that systemd might be a good fit with Docker, and I am looking into that too; but I guarantee you it will be on its own box and not homogeneous with the rest of my network.


All of Poettering's projects seem to be lifted straight from OSX.

Ran into a recent interview where he kept referring back to the OSX sound system when talking about Pulseaudio, and Avahi is zeroconf/bonjour. And with Systemd he constantly makes references to Launchd, the OSX "init".

BTW, Red Hat just now announced that the future of the company would be Openstack and the cloud. Fits perfectly with the push for containerization in Systemd.

More and more i get the impression that the "developers" mentioned as benefiting from Systemd are the likes of the Reddit crew. Reddit pretty much could not exist without Amazon's cloud services.

Meaning that for Poettering the future is two things: cloud computing and cloning OSX. And given the number of web monkeys that seem to sport a Mac, i am not surprised at all.

I just wish that they could avoid infecting the rest of the Linux environment...


I realize you were speaking in generalities but to be specific I don't hate systemd. I do dislike "emergent" architectures but that is more of a OCD systems analysis curse I have to deal with.

This statement, "systemd isn't final - it's software, and will come and go.", is the one that most captures my angst. And you can replace 'systemd' with 'linux' or 'gstreamer' or 'webkit' or 'gcc' or 'fsck' for that matter. Not only are they not 'final' but what they would be able to do if they were 'final' is left unspecified. That puts the system on the DAG equivalent of a drunken walk. And users don't seem to like it when their systems are evolving randomly.

I really enjoyed the early RFC process of the IETF because we could argue over what was and was not the responsibility for a protocol, what it had to do and what was optional, and what it would achieve when it was 'done.' Then people compared what they had coded up. When the architecture is the code and the code is the spec, my experience is that sometimes we lose track of where it was we were going in the first place.


To avoid using systemd in practice basically means switching distributions, or switching away from Linux entirely. Depending on your setup, this may be far from trivial.

I think systemd has a lot going for it, and it's been pretty stable on my Arch notebook, but I'm not too thrilled with the way it takes over so many tasks at once and eschews text log files. What's frustrating is that I didn't have much choice in the matter. Yeah, I could switch to another distro, but since Red Hat, Suse, and now Debian and Ubuntu are switching to systemd, that leaves Gentoo or BSD or something. Which are perfectly fine in their own right, but that's pretty drastic if I just want to avoid systemd.


> but since Red Hat, Suse, and now Debian and Ubuntu are switching to systemd

With so many heavyweight linux enterprise companies jumping on systemd, one must wonder what consideration they have given the issue? I'd wager, a lot. Also, note that systemd is really designed with servers in mind, so it's not surprising for a desktop/laptop distro user to find it bothersome (it wasn't designed with your use-case in mind). With that said, the beauty of Arch is you can yank systemd out and go with whatever init system you desire.


RH just announced that their future will be cloud computing (Openstack). I think Ubuntu is following right behind. Suse i can't comment on as i haven't followed that distro in ages. Debian is more of a puzzle, but i suspect it was a case of "don't have the resources to be contrarian".

As for the Systemd design: I think it started with Poettering drooling over OSX Launchd (his other projects also seem to be straight OSX feature clones), and it has since been hitched to the cloud computing push within RH.

In essence, the kind of server that Systemd seems to favor are cloud computing instances where storage and networking can come and go as the back end gets configured for new needs.

Traditional static big iron and clusters don't really benefit much from the "adaptive" nature of Systemd. If those break, they usually have a hot reserve taking over while the admins get to work figuring out what broke.


Try reading the actual discussion from when systemd was being proposed as the default. It wasn't because they "don't have the resources to be contrarian".


systemd is designed with all use cases in mind. I have yet to see any sentiment that it's specifically for servers, desktops or embedded. Lennart's "Biggest Myths" would have your statement decried as an utter falsehood.


Characterizing criticism as "hate" is fallacious and serves the opposite function of what you wish. People see support of systemd as being just ignorance and whining. If you want to support systemd, then do it with actual arguments.


> The author of this piece makes the classic mistake of equating the init system as the process manager and process supervisor.
>
> To stuff everything in the init system, I'd argue, is bad design.

The author is not making any mistake at all, or no more so than you are.

I'm sure you both value engineering principles like separation of concerns and a single source of truth.

The author believes that by removing the redundancy between initd / xinetd / supervisord / syslog the system is improved.

You disagree, and believe that these are separate concerns.

That's fine, you have different values / judgements in this matter. But saying he's `mistaken` for not agreeing with you is childish.


Well said. I recently migrated to FreeBSD after trying systemd on Arch and seeing that Debian and Ubuntu are planning to move too.

The dead simple rc.conf file seems so much nicer than the stuff I was dealing with in the entire world of Linux-based systems, like going back to the way Arch used to be when I really liked it.


This. The FreeBSD rc system just works, is well documented and is small enough to understand by one person without too much effort.


As an init system it works fine, but you do end up having to find or invent a bunch of additional stuff if you want similar functionality to what's driving some of the systemd use-cases. The result might still be better (I haven't done a detailed architectural comparison), but you do need something. For example one of the things I find useful about the "systemd way" of things is that it provides, finally, a story about how to apply cgroups to services in a sane way. The kernel provides the APIs, but actually using them from userspace was not fun previously, with multiple incompatible systems, largely based on tangles of shell scripts that had broken corner cases.

With FreeBSD, my impression is that manual shell scripting is still the norm. Integrating RCTL (FreeBSD's resource-limiting facility) with service management basically consists of manually writing in a bunch of imperative calls to RCTL into scripts. There's no way to configure services with limits declaratively, ensure the right thing happens when services are started/stopped, etc., precisely because there's no integration between the RCTL facility and the process-management or init facilities. Or at least I haven't found a way. The closest is that if you need such integration only for jails, you do have the option of third-party "monolithic" management systems, such as CBSD.


RCTL is a stateful database. It's not there yet, but the right solution for managing this declaratively as with anything else on a Unix platform is Ansible/salt/cfengine or something like that, not building those tools into a superset service that manages everything.

I will also add that, in my experience, managing disparate platforms is never a reality. There are perhaps two core platforms at a company and they are migrated together in blocks, all together. For us, we have a couple of legacy Ubuntu machines that are being canned this month. Everything else is Windows 2012 R2 and FreeBSD 10.

The "systemd way" is to provide a monolithic abstraction over many things with a DBus API. It's the equivalent of adding WMI and a registry to a Unix platform i.e. it's against the fundamental tenets of the operating system. Having managed windows systems for years, this is really not something I want to see. Time will tell, but if I'm not right about that then I'll eat all three of my hats.

And yes, I have experience with systemd as well, through evaluating RHEL7. Within two hours I'd hit a wall with timedatectl enabling NTP on the machine. The steps to debug the mess were horrible, and the issue eventually just spontaneously disappeared.

That's reminiscent of the stateful nature of Windows, which brings back many years of pain from the 1990s and '00s for me.


Don't you just have to turn off chrony to fix that?


I've been tinkering with NetBSD the last couple of weeks in a VMware Fusion virtual machine, and the RC system it uses is very nice. OpenBSD's is nice as well.


NetBSD can run as a xen dom0 host too, though normally I use Alpine Linux/OpenRC as dom0 because it's so small.


>NetBSD can run as a xen dom0 host too

Isn't there a bunch of fine print that goes along with that?

>Alpine Linux

How? The installer doesn't even work.


Are there reasons why Linux couldn't just adopt it?


Feel free. Most people won't, as systemd solves very real problems that people care a great deal about, whether or not you like the way it has solved them.


15-year Linux user here; systemd is pushing me hard towards leaving Linux. Please tell me what very real problems that people care a great deal about systemd has solved by turning log files from text to binary.

Also, I care about being able to use my computer, and for the first time in 15 years a systemd update caused my computer to needlessly drop into systemd emergency mode at boot. With this emergency mode being broken, I was effectively locked out of my computer, because an optional external USB drive that had been defined in fstab with no issue for a couple of years now required a nofail option. Now consider that this computer is located in a remote location 1000 km from where I live.

To me, systemd has already caused way more very real problems I care a great deal about than it has solved. Reducing boot time by a few seconds is not something I care that much about.


For me, Linux is pretty dead already, because I can't entirely trust the direction it's going in, having survived the Unix wars of the 1990s. There are so many parallels to that era at the moment, it's not funny. There are large vendors pulling it in separate directions (Canonical, Redhat, Google). At the end of the day, much like back then, customers will suffer from terrible support, fragmentation and political battles.

I just want to get shit done and solve problems and anything that risks that gets outed now.

FreeBSD hits the sweet spot, probably followed by NetBSD.


"There are large vendors pulling it in separate directions (Canonical, Redhat, Google)."

It's pretty clear how that's going to shake out, isn't it? Google is pretty much a non-issue here; yes, Android and ChromeOS use a Linux kernel base, but they have no impact on any mainline distros, and there's no indication Google wants them to. So it reduces down to two parties fighting for control: Canonical and Red Hat. And Red Hat is going to win. Canonical doesn't have the resources to go its own way on more than a handful of fronts (this is why when Debian switched to systemd Upstart was killed off; Canonical is far too reliant on Debian as an upstream to fight every issue), and their requirement for a CLA to accept anyone else's code means they are entirely reliant on their own coders, as nobody wants to sign Canonical's CLAs. We'll see how long they can stick it out on Mir, but they don't have the resources to fight a war with Red Hat on two fronts, so that's the only issue I expect to see them fighting over.


Yes and RedHat is IBM circa 1997 and Canonical is HP circa 1997. The Sun of 1997 is Oracle (again).

Creeping up on their arses is Microsoft (again) with Azure and incredibly cheap commercial offerings.


I've not claimed that systemd gets everything right. I've claimed it gets enough right that a lot of people will be entirely unwilling to give up those advantages and return to something that, for many of us, is now an inferior solution, just because there are things about systemd we may not agree with.

For my part, I agree that binary logs were not necessary, though I've yet to encounter any issues with them, and journald certainly does provide a lot of functionality that makes it more pleasant to deal with logs than before. All of that could have been achieved while retaining text logs, though. But at the same time, it is still trivial to get text files by telling journald to forward to syslog, if that matters to you.
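For reference, that forwarding is a single setting in journald's configuration (assuming a traditional syslog daemon is also running to pick the messages up):

```ini
# /etc/systemd/journald.conf
[Journal]
# Forward a copy of every journal message to the syslog socket, so a
# classic syslogd/rsyslog can keep writing plain-text log files.
ForwardToSyslog=yes
```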

Other things I do care about include getting rid of init scripts - they are a persistent source of problems. I'm inclined to believe not a single one of them is bug-free, though that's probably a bit uncharitable. Unit files help. So does cgroup containment, which rids us of the abomination that is relying on pid-files and hoping that works reliably (it doesn't, since pretty much nobody is thorough enough when writing init scripts). Other things include better recoverability in cases where critical processes get killed, and well-thought-out handling of early-stage logging. And things like systemd-cgtop and systemd-cgls are nice.

I'm sure we'll eventually get solutions that split more of this functionality out into more cleanly separate components, and that'll be great, but until then I'm happy to stay with systemd.

As for the problems you ran into, that sucks, but any large change like this will have painful teething problems and they're not a good basis for judging whether it's a good long term solution - I've had plenty of boot failures caused by problems with init scripts as well.

Boot time is a long way down the list of benefits for me too - most of our servers have uptimes measured in years, and even my home laptop usually goes a month or two between reboots.


I couldn't agree more. If it becomes impossible to use Linux without systemd, then I won't be using Linux any more.


There's nothing stopping a Linux distro from doing this... but

It would be a step backwards: it is simpler and does less, so booting would be slower and some features would be missing.


How often do you reboot your kit? Boot time is such a stupid metric even on laptops and stuff where you just suspend/hibernate.

My BSD systems (not front-facing and therefore on a lesser patch cycle) rarely get rebooted and neither do the processes so this is indeed moot for me.

Proof:

http://i.imgur.com/tZsM82Q.png

Yes that's a memcached uptime on a host that has had 10,185,367,932 cache hits...


Suspend/hibernate on the free Unixes is a nightmare of incomplete support and buggy drivers, so it's not much of a solution. I don't know a single person who has a working laptop suspend/resume setup on FreeBSD (though it's theoretically possible), and it's not usually recommended to rely on it even if you could get it working. Linux has somewhat more complete support, but it's still very hit or miss, and it's common for stuff to be wonky after a resume even when it does work.


I'll give you that to a degree. It does suck on FreeBSD with my X201. Nothing works, but I'm being cheeky now and running it in VirtualBox on top of Windows (which I need for other work).

OpenBSD however works wonderfully.


The main question is: would it work? For me, systemd, or applications starting to support only systemd, just breaks things that used to work (for example, something going wrong with policykit/consolekit under sysvinit + systemd-shim).

Also, there are some peculiarities in the way LSB init script compatibility is implemented in systemd: it tries to be 'smart' and remember the scripts' state. So you start an init script and it fails for some reason, perhaps even exiting with an error code; perhaps you are still developing that init script. Now, after fixing the problem, running the init script / systemctl start doesn't even try to run the script, because systemd thinks it has already run. You first have to tell it to stop the script (which fails), and only then can you run it again.
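The dance described above looks roughly like this (the script name is made up, and exact behavior varies by systemd version):

```
# myscript is a hypothetical LSB init script under /etc/init.d
systemctl start myscript     # script fails; you then fix the bug, but...
systemctl start myscript     # no-op: systemd remembers the previous state
systemctl stop myscript      # reports an error, but clears the recorded state
systemctl start myscript     # now the fixed script actually runs
```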


Why would booting be slower ?

My BSD systems boot quickly enough for me.


My FreeBSD system has a 30-second timeout during which the entire boot process is halted because it waits for a default route to the internet... which it won't get, because I haven't configured one.

It's pretty dumb, and not enough of a problem for me that I'd figure out how to work around it, but it's a pretty good example.


Just set in rc.conf:

   defaultrouter="NO"
It shouldn't hang.


I don't think this 30-second timeout is a bug in FreeBSD or in rc. You may want your server to wait for the network to become available. Ubuntu Server has the same "waiting for network" timeout:

http://askubuntu.com/questions/63456/waiting-for-network-con...


But the only reason this is necessary is when the boot system isn't smart enough to start whatever can safely be started through proper dependencies.

My experience is that a substantial amount of time is wasted weeding out undesired timeouts in startup scripts, because they lead to increased downtime.


Slackware uses exactly this system.


The really sad thing is that Arch Linux used to pride itself on being BSD-like. It used a similar rc.conf system, and each service had exactly one init script, not the weird multiple-file init script system Debian has. Before systemd, Arch was the closest you could get to BSD simplicity on Linux.


Slackware and Gentoo have both been closer for a very long time.


I appreciate your work on uselessd. Nothing demonstrates a counterpoint quite like written code. The last thing we need is another ranting systemd blog post.

We always need strong alternatives. Even if they face the risk of being taken as simply a political statement, the effects of that statement will be seen in decision-making down the road.


Yes! What the systemd "discussion" has been missing is viable alternatives that are somewhat comparable in features. Most of the flamewar is focused on people who consider the problems that systemd tries to solve non-issues.

There are some real issues being pointed out (particularly regarding monolithic design) but no-one has attempted to actually fix that in any way (in code, that is).

While it is unlikely that I will end up using uselessd (unless it "wins" in some way, e.g. in embedded space with uclibc and musl), I very much welcome the effort to bring out alternatives that address the same problems as systemd, yet trying to fix some of the issues there are.


There are plenty of viable alternatives: s6, runit, OpenRC, and so on. I'm not really convinced uselessd is a viable alternative - it keeps way too much of the badness of systemd, but I guess that makes it viable if you were willing to consider systemd to begin with.

A much better solution for the problem of user-facing applications (e.g. "desktop environment" software) depending on systemd's public dbus interfaces is to provide a fake service that gives them fake data - the same way you would sandbox Android apps for privacy by giving them a fake Contacts list, etc.

As for the other main "public interface" of systemd that things are starting to depend on, the systemd service file format, it would be easy to add support for this file format to any other process supervision system.


Hey, Rich. First of all, thanks for musl libc.

At the moment, yes. We do keep much of the internal systemd architecture intact, but we do eventually aim to partially decouple it, or at the very least expand the breadth of configure prefixes for tuning its behavior. We are a pretty early-stage project, after all.

Indeed, the systembsd and systemd-shim projects are working on the D-Bus interface reimplementation part.

Our goal right now is to be a minimal systemd base that can be plugged in interchangeably and have the vast majority of unit options be respected.

There already are systems that offer primitives to reuse systemd units. nosh is one of them, and there also exist scripts that can convert systemd services to SysV initscripts, and even the opposite (dshimv).


> To stuff everything in the init system, I'd argue, is bad design.

You've slain the straw man... ;-)

systemd doesn't put everything in pid 1. It defines some mechanisms to orchestrate the whole thing that include pid 1.


Whether it's all in pid 1 or not is irrelevant. What matters is that it has a monolithic architecture, whereby breakage in any one part or their communication channels can bring down the whole system. This is not just a theoretical concern; it has REPEATEDLY happened.


> Whether it's all in pid 1 or not is irrelevant.

All of the existing mechanisms are also a "system" that comprises a ton of processes... If systemd is monolithic on these grounds, then so are they.

> What matters is that it has a monolithic architecture, whereby breakage in any one part or their communication channels can bring down the whole system.

Uh-huh... I think you are speaking to branding more than technology. Keep in mind that systemd is using existing components in much the same fashion they were already being used (hence the accusations about them "absorbing" udev).

If you look at the architecture, it has got very clear points of encapsulation that are much more structured than the loosey-goosey stuff that came before it.

> This is not just a theoretical concern; it has REPEATEDLY happened.

Yeah... with existing systems. There's any number of points of failure that are the stuff of legends in Unix system administration. Obviously, it will take time to get systemd thoroughly cleaned up, but it's not hard to look at the design and see how it provides plumbing to simplify and avoid a whole host of these scenarios.


Systems which do not use systemd simply do not have these problems because there is no analogous component. If syslogd goes down, the worst that happens is you don't get logs. Init doesn't go down because it essentially has no inputs. Individual services can go down if they're poorly written, but they won't bring the system down with them. Traditional systems (the hideousness that is "sysvinit") have plenty of other different problems (e.g. race conditions in process supervision), but deadlock or bringing down the whole system is not one of them.

With systemd on the other hand, all of the components under the systemd banner are tightly interconnected and communicating. In particular pid 1 has ongoing communication with multiple other components, and misbehavior from them can, both in theory and in practice, deadlock the whole system. In case you missed it, this is roughly what "monolithic architecture" means: even though the components are modular, they're designed for use in a tightly interwoven manner that's fragile. It's completely the opposite type of "monolithic" from the kernel, which has everything running in one address space, but with architectural modularity, where interdependency between components is kept fairly low.


> In particular pid 1 has ongoing communication with multiple other components, and misbehavior from them can, both in theory and in practice, deadlock the whole system. In case you missed it, this is roughly what "monolithic architecture" means: even though the components are modular, they're designed for use in a tightly interwoven manner that's fragile.

You mean like how, if even one of my SysV init startup scripts hung indefinitely, all subsequent components would never get started? Or are you referring to how the whole system would hang when the root filesystem device was temporarily unmounted (really fun with network filesystems, although to be fair, NFS implementations eventually became robust enough that this wouldn't be a complete disaster)? Or are you referring to fork bombs, or those race conditions you mentioned, that would bring my system to a complete standstill? Or are you referring to how a race condition with date formatting in syslog actually hung my entire system time and again? Or perhaps you mean how a lot of init scripts had little (if any) retry logic, such that you'd often end up with a critical component of your system not running, often in ways where you'd not find out about it, or worse still, not be able to do anything about it without some really intrusive intervention? Or maybe you are referring to how, if you got your init startup order wrong for one of many critical components, you'd have a deadlock before you ever got a chance to actually fail? Or maybe you're referring to how the right kind of getty failure, triggered by a weird byte in a config file, could turn your system into a paperweight?

It's so hard to tell which scenario you are referring to. ;-)


> If you look at the architecture, it has got very clear points of encapsulation that are much more structured than the loosey-goosey stuff that came before it.

Then why can't it offer a stable interface that lets me swap out e.g. udev with eudev, like I could before?

That's what makes it monolithic - not the implementation details but the absence of well-defined interfaces between the pieces.


> Then why can't it offer a stable interface that lets me swap out e.g. udev with eudev, like I could before?

I'm not sure it can't.... To the extent it _doesn't_, I imagine it is not much of a priority, since eudev is a fork from udev, and is lacking the enhancements to udev the systemd project has been working on.


From experience with Linux init scripts, I'm far less concerned about systemd than about SysV-init-style boot processes, to be honest. I lost track of the number of boot issues related to poorly written init scripts that I dealt with over the years.


I have an anecdote from a short while ago. We had a server with several database instances (each with its own init script) running on it.

The scripts were buggy in such a way that starting one database would bring it up okay, but prevent the rest of the instances from starting. Also, using the "stop" directive would successfully stop that database... and all the others as well.

The bug probably occurred because the init scripts were horrible to begin with and had been copied (ugh) to accommodate more instances, without the necessary modifications to not screw things up.


Sounds familiar..

One of my "favourite" problems with init scripts for service stop/start is that way too many of them basically throw their hands up if the contents of the pid-file don't match what they expect. Never mind that 90% of the time when I actually want to run stop/start/restart, it is because something has crashed or is misbehaving, and there's a high likelihood the pid-file does not reflect reality.

So a far too common scenario is: a process dies. You try to run "start". Nothing happens, because the pid-file exists and the script doesn't verify that the pid actually matches a running process (or it checks that it matches a running process, but not that the process with that pid is actually the one we want).

OK, so we try "restart" or "stop". We get an error, because the pid-file contents do not match a running process, and rather than cleaning out the pid-file and starting the process, the script just bails.

Basically I don't trust init scripts from anyone but distro maintainers themselves, and even then there are often plenty of edge cases that cause problems.

Whatever else one thinks of systemd, I really like its solution to this: using cgroups to keep proper track of exactly which processes belong to a service, without resorting to brittle pid-files, which rarely seem to be properly implemented. Of course, the cgroups approach could be implemented as a separate tool, but pid-files badly need to die.
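To make the failure mode concrete, here is a minimal sketch (the pid-file path and daemon name are made up) of the fragile pattern versus a slightly more careful check:

```shell
# Illustrative pid-file checks, as found in typical init scripts.
PIDFILE=${PIDFILE:-/tmp/mydaemon.pid}

fragile_is_running() {
    # The common buggy pattern: the mere existence of the pid-file is
    # treated as proof that the service is up.
    [ -f "$PIDFILE" ]
}

careful_is_running() {
    # Slightly better: the recorded pid must also be a live process.
    # (A real check should verify the process identity too, and even
    # then pids can be recycled -- hence the cgroup approach.)
    [ -f "$PIDFILE" ] || return 1
    pid=$(cat "$PIDFILE") || return 1
    kill -0 "$pid" 2>/dev/null
}
```

A stale pid-file (left behind by a crashed daemon) makes the first check report "running" when nothing is, which is exactly why "start" silently does nothing in the scenario above.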


I wasn't talking about systemd in particular. I was using a hypothetical example to counter the OP's point. That said, systemd's main.c still has significantly more baggage than most other systems I've seen (I never looked into Solaris SMF internals, for instance).

