I was saddened to read the release notes that state:
"OpenSSH is a 100% complete SSH protocol 2.0 implementation..."
This bug[0], calling out the fact that OpenSSH doesn't implement section 6.9 of RFC 4254 (which allows you to send signals to remote processes), has been open since 2008, complete with community-submitted patches that implement that part of the protocol.
My recent pet Golang project[1] is a parallel remote command executor that uses OpenSSH, and I would really love the ability to better manage remote processes I execute via SSH.
I mostly jumped to the bottom of the bug discussion thread there, but there doesn't seem to be much opposition to adding this, just comments deferring this to other releases. Are they just requesting that patch submitters do some extra work to get it in there?
I think they've been wanting to get this patch into OpenSSH for years, but for whatever reasons it just never happened. I assumed it was a community patch, but it was actually written by Darren Tucker, who is an OpenSSH dev.
There was some conversation about whether it was appropriate to pass on certain signals, but that's about it. After Darren cleaned up some client UI issues that bugged him, he even suggested that we might see the patch in 5.4.
OpenSSH is one of the pieces of software that I would argue needs to be kept up to the latest and greatest regardless of the original version shipped by the distro.
I would similarly push for the latest version of OpenSSL, but that's harder to get right.
Right now distros who backport, such as Debian, need to read changelogs very carefully and decide what to backport and what not to. Quite a daunting task.
I get that. My argument is for a small subset of software tools to be considered sufficiently critical that updates are prioritized, because compromise of said tools would have outsized consequences.
> That's not really the sort of decision application programmers should be making for sysadmins.
As a programmer you have the right (or maybe even obligation?) to write secure software and I would argue software that's hard or impossible to use insecurely. It should live up to the standards of the time of release, not the time of the release of the first version (in case of OpenSSH that would be more than seventeen years ago).
As a sysadmin you can always decide to stick with an old version if that is what the environment you operate in demands.
I think this proactive mentality of OpenSSH is an important part of their success and why it has such a good track record from a security point of view.
I disagree with bandrami in this case, but I don't think this is quite right either, because you focused on an instance-specific goal rather than a universal principle:
>As a programmer you have the right (or maybe even obligation?) to write secure software and I would argue software that's hard or impossible to use insecurely.
There is plenty of software where secure usage is not a concern, and that's fine. Rather, it would be better to say that as programmers we have the job to ensure that our software is as fit for its primary expected purpose as possible, and in particular lacks any surprising gotchas. Sometimes within a given program's core purpose there are decisions that can only be properly made as part of deployment/usage, and those are appropriately left to the sysadmin/user, but if something is directly contrary to the core purpose then it's always worth questioning whether it needs to change.
In the case of OpenSSH in particular, the core purpose is in fact secure links. We've all long had an insecure, very fast virtual terminal system if we wanted one: it's called Telnet. There is no reason that any available built-in mode of OpenSSH crypto should ever be insecure. Asking for obsolete methods, generally considered to no longer be reliable, to be "left up to the sysadmin" would be like asking for rot13 as a sysadmin option: completely contrary to the purpose and expected function of the program. Not just in security but in software in general, extra switches carry developmental load (more code to go wrong), deployment load (more possibilities to make mistakes), and cognitive load, so they should always be considered to have inherent negative value and then asked to justify themselves, not assumed to stick around forever by default.
Deprecating ciphers I'm fine with. Telling me a minimum required key size isn't, because they have no idea what the window of security I'm looking for is. If I need to keep a text secure for 2.5 seconds, a short key is fine, and for that matter a longer key gets logistically problematic.
Since I bump back and forth between sysadmin and programmer gigs, I don't get why a system admin wouldn't want a programmer to build their software with privilege separation. What am I missing? Having a secure system is a big deal for a system admin (these days the #1 deal), but at some point you have to rely on a programmer getting it right.
Privsep is great; I use it whenever I can. I also have tiny embedded systems that don't support it, as well as containerized systems that don't need it.
You've just demonstrated that you don't understand either the goals or mechanism of privilege separation: 1) it doesn't require root and 2) it protects more than the root account.
It's good that you have the source code so you can re-add/maintain the features you need, or can backport fixes to previous versions that have them implemented.
These changes are very good for the vast majority of users, as they remove two big opportunities to silently shoot yourself in the foot.
In principle I agree. Looking at the security patches, there seem to be a few issues around this feature. My guess is that this feature is entering the "disable before retire" phase of its life.
Their TLS configuration is very strict. Only TLS 1.2 with only ECDHE + ChaCha20 or AES-GCM cipher suites. A lot of clients can't talk to it (including IE 11 on Windows 7 and 8), so it makes sense that they still offer non-encrypted connections: https://www.ssllabs.com/ssltest/analyze.html?d=openssh.com.
HTTPS is about more than just confidentiality (encryption), though. There's also authentication (the party serving the traffic to you is the party you asked for) and integrity (the traffic has not been modified since it was sent).
This prevents all kinds of undesirable behaviour, like your ISP injecting JavaScript ads into the webpages you view.
Confidentiality is just a nice little side-benefit.
But their server should also redirect normal users.
EDIT: ah no, different computers. On my personal computers I have it, not on my workplace computer. (and my point remains, they should redirect traffic)
No, they should either serve the URL requested or refuse to do so. There are networks that do not allow TLS traffic, and clients that do not support it.
Who needs this? I have an SSH server on my Ubuntu machine; do I need to update OpenSSH? Also, I have OpenSSL installed (for some reason), is that the same thing?
The ssh server on your Ubuntu machine is probably OpenSSH. If you're keeping your machine up to date with Ubuntu's security updates then you can let them take care of it.
OpenSSL is separate software, but it provides a cryptography library that OpenSSH (usually) uses. You will need it installed.
[0]: https://bugzilla.mindrot.org/show_bug.cgi?id=1424
[1]: https://github.com/spudlyo/metassh