
This is a dumb idea unless CAs become automatic and free or are completely replaced by something better.

The reason HTTPS isn't used more is that it's a major hassle and quite expensive (it can easily double the yearly cost for smaller sites).

If using HTTP 2.0 requires buying SSL certificates, the smaller sites currently not using SSL will just be stuck on HTTP 1.1 forever.



The encryption of the transport and the verification of the identity of the server should be more disconnected.

The CA part is only to verify that the machine you are talking to is who it says it is.... in reality all this really verifies is that they filled out a form and there is probably a money trail somewhere leading to a real entity.

But I've never understood why the identity is so tied to the transport security. It would make everyone's life easier if HTTPS could be used WITHOUT identity verification (for 90% of cases this would make the world a better place)

We'd still want to purchase third-party identity verification... and browsers would still present this in the same way ("green padlocks" etc)... but even without verification every connection could be encrypted uniquely, and it would raise the bar considerably for a great number of sniffing-based issues, would it not?

EDIT: I guess what I'm saying is a social issue: We've put so much effort into telling everyone that "HTTPS is totally secure", that we've left no room for a system that is "Mostly secure, unless someone has compromised the remote system/network" .... maybe it's too late to teach everyone the difference between "encrypting a letter" and "verifying that the person you give the letter to is who they say they are"


I'm sitting here still trying to think of a way to prevent MITM attacks if you have no idea who the guy is on the other side... Maybe I need to drink more tea this early in the morning?

I guess if you did something weird like flip the protocol upside down such that all people would have "internet licenses" and enter them into the browser they're using at that moment (or better yet, let's charge each user $50/yr PER BROWSER LICENSE) and it became the site's problem to encrypt to the end user's key... One way or another I think you have to verify the identity of at least one side WRT MITM?


So you don't prevent MITM attacks...it's still a step up from cleartext.

All this change is meant to ensure is that all HTTP/2.0 traffic is encrypted, not that it is all authenticated. For authenticated communication, we continue to have what HTTPS is today.

The main issue is retraining people to not think that "https" means "safe". That's something that browsers are already good at, however, because there is already a world of difference between the user experiences of visiting a site with a trusted cert and visiting a site with an untrusted cert.


It's not a meaningful step up from cleartext, because a passive attacker can become an active attacker with just a couple spoofed packets or control of a single DNS resolver.


It is a meaningful step up, because the passive attack is entirely risk free for the attacker, while an active attack carries with it the risk of detection.

The practicality of enormous secret drag-net operations like the NSA has been running would decrease dramatically if TLS had been the norm rather than the exception, even with unverified certificates. You can't opportunistically MITM 100% of connections without somebody noticing.

It is a shame that cleartext connections have to be the out-of-the-box default in every web server. Security should be the default, and I think the CA mess is to blame for that not being the case.

The sane thing to do would be generating a random self-signed certificate if the administrator didn't set up any explicit certificates. That would prevent passive attacks, and can be built on top of with technologies like certificate pinning and Convergence to mitigate active attacks.
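The suggested default could be as simple as a one-liner run at first startup, sketched here with openssl (the file names, the rsa:2048 key size, and the one-year validity are placeholder choices, not anything a server actually ships with):

```shell
# Generate a random self-signed certificate when no explicit cert is configured.
# "server.key"/"server.crt" and the validity period are arbitrary choices.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout server.key -out server.crt \
    -days 365 -subj "/CN=localhost"

# sanity check: print the subject of the freshly minted cert
openssl x509 -in server.crt -noout -subject
```

That alone defeats passive sniffing; pinning tools can then layer key continuity on top.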


This appears to be a comment that seriously suggests removing authentication from TLS as a countermeasure to NSA surveillance.


Not entirely. I kind of agree. I think I even posted a nearly identical comment once.

It would be nice if there were two kinds of connections. Encrypted unauth and encrypted auth. That seems strictly better than offering unencrypted connections. Your browser could still display the "wide open to attack" icon for encrypted unauth if you like.


Why?!

The entire reason people think TLS should have "unauthenticated encryption" (which is in the literature kind of an oxymoron) is that they don't like the SSL CAs.

I don't like them either.

But the SSL CAs are a UI issue. Instead of dinking around with the browser UI to make worthless "unauthenticated encryption" sessions appear different, why not just build a UI for TACK, let people sign their own certificates if they want, but then pin them in browsers so those sites can rely on key continuity instead of nothing.

Five years from now, if TACK pinning becomes the norm, I think it's a safe prediction that basic TLS certificates will be free from multiple vendors. Moreover, the corruption of the CA system will matter less, because unauthorized certificates will violate pins and be detected; the CAs that issue them can be evicted.

While we're at it, why don't we just fix the UI for managing CA roots? It hasn't changed since the mid-1990s.

I am baffled by why anyone would actively promote an insecure cryptosystem as a cure for UI problems, or even as an alternative for some entirely new cryptosystem like MinimaLT.


It's just a matter of what can be done today vs tomorrow vs next year.


All of these things are simply gated on browser vendors. That's the overhead. Why would you push for a new UI for insecure connections when you could instead lobby for TACK?


Of course not! I am suggesting that connections should be encrypted by default, whether the endpoints can be authenticated or not.


That's still a step up. Now you need to be an active attacker, and not just a passive one.

The perfect is the enemy of the better.


The worse is the enemy of the better too.


It's easy to decide to avoid the worse. But deciding on the tradeoff for the better vs the perfect is much harder.


This tradeoff is easy. UI that makes unauthenticated connections easier to accept is a bad idea; UI that makes certificate pinning work better is a good idea. Suggested tradeoff: pursue good idea, avoid bad idea.


> All this change is meant to ensure is that all HTTP/2.0 traffic is encrypted, not that it is all authenticated.

This is a perfect example of <strikethrough>"good enough is the enemy of good"</strikethrough> "not completely broken in every possible way is the enemy of barely good enough" that is so prevalent in web security. If we don't use the chance we have now to secure internet traffic, we will continue to be completely vulnerable to rogue WiFi APs like http://www.troyhunt.com/2013/04/the-beginners-guide-to-break... and to companies as well as countries snooping on their employees'/citizens' traffic via huge proxies for years to come.


The "guy on the other side" is the fridge you bought at the store and just installed in your house.

You want to connect to it securely, but the fridge really has no way to prove to you who it is through any kind of third-channel.

Hell, forget about fridges. It's the router you just got at Best Buy.


> I'm sitting here still trying to think of a way to prevent MITM attacks if you have no idea who the guy is on the other side... Maybe I need to drink more tea this early in the morning?

It's not that you don't know who the guy is, you just don't rely on a 3rd party to tell you that. See how SSH fingerprinting works.


SSH keys work exactly like self-signed certificates. On first connection you get the "whoah, this isn't trusted, do you want to proceed" warning, and if you accept, you are not warned in the future unless the key changes.

If browsers would make it easier to "permanently accept" a self-signed certificate (right now it's usually a multi-step process with blood-red warning messages at every step) we'd have the same situation as SSH keys.
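The SSH-style "accept on first use, warn on change" behavior can be sketched with openssl. Here a locally generated certificate stands in for whatever the remote site would present (an assumption for the sake of a runnable example; in practice you'd fetch it with `openssl s_client -connect host:443`):

```shell
# Stand-in for the self-signed cert a remote site would present.
openssl req -x509 -newkey rsa:2048 -nodes -keyout site.key -out site.crt \
    -days 365 -subj "/CN=example.com" 2>/dev/null

pin=pinned_fingerprint
fp=$(openssl x509 -in site.crt -noout -fingerprint -sha256)

if [ ! -f "$pin" ]; then
    echo "$fp" > "$pin"                # first visit: pin it, like an SSH host key
    echo "pinned on first use"
elif [ "$fp" = "$(cat "$pin")" ]; then
    echo "fingerprint unchanged"       # subsequent visits: no warning needed
else
    echo "WARNING: certificate changed since first visit" >&2
fi
```

The browser UI question is just when to surface that last branch, and how loudly.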


HTTPS involves multiple servers that are swapped in and out regularly and commonly use different keys, and the keys themselves change regularly. There'd be no way to know whether a key change is part of regular operation or a man in the middle.


Note that there's a difference between SSH fingerprints and (self-signed) SSL certificates. Multiple servers can easily share a certificate.


It would be unwise to do so. In any case, phasing out old servers and certs for new ones is common practice. Using one cert everywhere (which would mean using one set of keys everywhere, which is a horrible idea) would require more downtime for maintenance of the certificates. It's not gonna happen.


> See how SSH fingerprinting works.

It doesn't, as a MITM prevention technique, for a gullible population. It doesn't even work for a non-gullible population that's been trained to always hit "Y" on first connection from a host to a new server... err... I mean first connection to a MITM who then talks to the new server for you.

There are ways to make the situation slightly harder like the extremely unpopular idea of putting SSH host keys in DNS and then securing DNS ... err .. probably securing DNS via a CA type backend.... Well even unsecured DNS holding SSH host keys is better than nothing, or at least it makes people more susceptible because they feel safer, or something like that.


The odds of encountering a MITM attack on your first connection to a new server are low.

If it does happen, then the attacker will have to keep doing it forever, or else you'll get a warning the moment you manage to connect directly to the site without the MITM.

If your first connection is direct, then you're safe from MITM forever. If your first connection is compromised, then at least you'll likely discover that fact quickly.

I think this qualifies as "works".


This seems similar to the logic behind TOR entry guards: https://www.torproject.org/docs/faq#EntryGuards


I think the point is that most users wouldn't notice having to press "Y" again to accept a new fingerprint.


ssh only gives you the "y/n" choice the first time you connect. If you've connected before but the key has changed, it throws up a very nasty warning and does not even give you the choice to continue. You have to manually edit your key file to remove the offending entry if you want to start using the new key.
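That manual recovery step looks like this (a throwaway key and known_hosts file are generated here purely for illustration; `example.com` is a placeholder host):

```shell
# Build a demo known_hosts file with an entry for example.com
ssh-keygen -t ed25519 -f demo_key -N "" -q
printf 'example.com %s\n' "$(cut -d' ' -f1,2 demo_key.pub)" > demo_known_hosts

# What you'd run after a legitimate key change: drop the stale entry,
# so the next connection is treated as a first contact again.
ssh-keygen -R example.com -f demo_known_hosts
```

(Against your real `~/.ssh/known_hosts` you'd omit the `-f` flag.)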


A fence doesn't stop someone with a ladder, but that doesn't mean fences are a bad idea.


I couldn't agree more.

It is much more secure to visit a site with a self-signed certificate than to visit the site over http. And yet, browsers start flashing red when you do that. At the least, they should show red on http, yellow on self-signed https, and green on trusted https.


I agree.

One actual use case that could be solved in a better way than today would be login portals where a user has to be logged in to access the Internet.

Today, this is typically solved by issuing a redirect of some kind to the client (in the future, I guess it will receive a 511).

For HTTPS, the choices are: a) dropping the packets, ensuring extra costs in the support organizations when users wonder why their internet doesn't work, or b) doing a MITM and issuing a redirect to the login portal that way.

Different operators choose different solutions here. Neither choice is good. It would be better to have a way of telling the client that, yes, the connection is still encrypted, but it didn't end up at the place it expected.

Might it be possible to add to TLS so that there is some way of issuing a gateway redirect? Perhaps, perhaps not. I've seen precious little action in that area.


Surely there must be a solid reason why endpoint authentication and transport encryption must be inextricably combined into one program.

Personally, I would find great use for the authentication function of OpenSSH as a separate program. In other words, a program that does one thing only: it verifies an endpoint is who it says it is.


The short answer is that when this stuff was designed, it was assumed that a passive attack could trivially become active, so defending against passive attacks alone wasn't considered worthwhile.

Newer info about fiber splitters invalidates that assumption.


I recently bought a PositiveSSL certificate for less than $3 per year at gogetssl.com. That's less than one third of what I paid for the domain to use it on. If you can afford a domain, you can afford to put an SSL certificate on it.

Low-cost SSL brands like PositiveSSL and RapidSSL are so cheap nowadays, some registrars hand them out for free if you buy a domain. And they're compatible with every version of IE on Windows XP, unlike those free certs from StartSSL.


The cost usually isn't so much the cost of the cert, it's more the cost of the static IP.


What browser would support HTTP 2.0 but not SNI?


Lots of utilities that aren't conventional "browsers" but talk HTTP.


The question is still completely valid. What tool would support HTTP 2 but not SNI?


I'd not heard of SNI before. Is this something that can be used now?!


Yes ... as long as you don't need to support IE on XP, Android 2.x, or Java 6.

https://en.wikipedia.org/wiki/Server_Name_Indication


> IE on XP, Android 2.x

That's still a lot of devices.


Neither of those browsers supports HTTP/2.0, so that's moot.


> IE on XP

In this case, that's also going away hard next year when Microsoft discontinues support for Windows XP. At that point it'd be really tempting to suggest switching to Firefox or Chrome, both of which do support SNI.


SNI is useful for hosting, but I don't think it helps embedded devices. Is any CA willing to issue me a cert for 192.168.0.1? Wait, don't answer that.


Why do you want to use global CAs for internal services? Wouldn't it be better to use your own CA? I find that identifying a site by its cert fingerprint is much stronger authentication than the fact that it has a valid cert. Actually it would be a good idea not to trust anything other than the company's internal CA for internal services. But as far as I know, browsers aren't up to this challenge. Maybe AD allows this, but I haven't ever seen any post on how to do it.


It'd be more interesting to see if a CA would issue a cert for something.local — sadly, you're probably right to fear the worst…


They will -- but I believe that's to be phased out by 2015 or so.


You can solve this by setting up your own Certificate Authority.


If we'd get to it and get IPv6 up, the business of selling static IPs should become very unprofitable, as there would be a virtually unlimited supply of IPs. Why is this not happening?!?


For the same reason that SSL adoption is currently lower than ideal, for many uses the increased cost (actual cost, and cost of time) is not perceived to be worth it. For many/most uses IPv4 works just fine and non-SSL is just fine.


Don't forget the cost and barrier to entry of setting up the cert and SSL and learning to administer the extra steps well, without introducing more holes through complexity.


IPv6 is free as in 4 billion ^ 4 addresses free.


Presuming you're noting exponentiation with ^, 4 billion ^ 4 billion addresses would mean 128 gigabits per address. IPv6 addresses don't take up 16 gigabytes each in any sane encoding.

IPv6 has 128-bit addresses, which works out to about 4 billion ^ 4 addresses, not 4 billion ^ 4 billion addresses.


My original comment said 4 billion ^ 4 addresses, as in 2^128. There is no second "billion" in that line.


What about embedded devices?

Not everything on the web is sitting on a well-known server.


Right - embedded devices, one-off toy apps, a lot of internal organization pages, and a lot of hobbyist projects make up a huge part of the "web space". These will all suffer for more reasons than cost of certs - it adds a new hurdle and a barrier to entry.

Think about printers for a moment: now all the printers providing http interfaces need to include a way to install an organizational cert on them (at least for a lot of organizations). That means that there needs to be an out-of-band step in setup (and maintenance) to add the cert, or a way to do so from an http interface. The latter just screams "giant security risk" for a dozen reasons.


I am sure it happens but you should not be exposing your printer to the Internet. That is just asking for trouble. You would not need HTTPS on an internal network.


> You would not need HTTPS on an internal network.

Oh, really?

http://www.washingtonpost.com/world/national-security/nsa-in...


But HTTP 2 requires it, no matter if you need it or not.


No, it doesn't. From the article: “To be clear - we will still define how to use HTTP/2.0 with http:// URIs, because in some use cases, an implementer may make an informed choice to use the protocol without encryption.”


The reason for your parent comment (and my initial misunderstanding) was because this post title was submitted as "HTTP 2.0 to be HTTPS only". By the time I refreshed the title was changed, but this is why we need to stop modifying original article titles in order to bait more views.


So you require a cert for personal projects. That doesn't mean a cert that chains to a public trust. You could easily cut your own cert and trust it on whatever device you wish to access the site on.


And for e.g. intranet usage the organisation could set up their own internal CA to validate TLS certificates. The root certificate could be distributed in a manner suitable to the organisation. E.g. via Group Policy for Windows clients, or by simply including it in the disk image used for setting up new machines.
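A minimal sketch of such an internal CA with openssl (all names here are placeholders, and a real deployment would add basic constraints, revocation, and careful key storage):

```shell
# Create the organisation's root CA (the cert you'd distribute via Group Policy)
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
    -days 3650 -subj "/CN=Example Corp Internal Root"

# Key and CSR for an internal server
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
    -subj "/CN=intranet.example"

# Sign the server cert with the internal root
openssl x509 -req -in server.csr -CA root.crt -CAkey root.key \
    -CAcreateserial -out server.crt -days 365

# Any client that trusts root.crt now accepts server.crt
openssl verify -CAfile root.crt server.crt
```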


Sure, but there are many new (and not-so-new) "internet of things" devices that explicitly _do_ want to be able to connect to the internet - and a great deal of additional value derives from that ability.

I've spent a lot of time recently working out how to securely allow a set of christmas tree lights with an embedded linux controller[1] with wifi connect via OAuth to your Twitter or Facebook account while being controlled from your phone. The lack of workable/affordable ways to have SSL keys on the device that your phone will trust makes life _very_ interesting - and getting the password-equivalent OAuth tokens into the device has been a fun challenge.

[1] Gratuitous self promotion - http://moorescloud.com/ go pre-order one now to justify getting UL certification so we can sell 'em in North America! _Please!_ ;-)


> You would not need HTTPS on an internal network.

This is false. Good security is layered security.


Sounds like devices that wouldn't make the switch to HTTP 2 anyway.


So your argument is: let's create a new version of a protocol, but make it less capable than the version which precedes it, so that there are very valid use-cases which cannot be solved using the new protocol-version, forcing us to rely on multiple protocol-versions for what should be the same thing?

How on earth do you make such an argument make sense?


Did you mean to send that reply to me?


And what about wildcard certificates?


The good ol' Subdomain-versus-Subfolder debate just gets a bit more expensive on the left side, that's all.

Services that really need an unlimited number of subdomains are a tiny minority, and market prices reflect this. For the time being, someone like WordPress.com can probably afford $60-$100/year for a wildcard certificate. Everyone else just sticks to subfolders like Twitter does.

After all, nobody will be preventing you from running a website. Your priorities and economic circumstances might prevent you from using pretty subdomains, but that's no different from the current reality where short and memorable dot-com domains cost thousands of dollars.


> And they're compatible with every version of IE on Windows XP, unlike those free certs from StartSSL.

This matter has nothing to do with the version of IE and everything to do with whether Windows root cert update is turned on.


Domain names at that price should include an SSL certificate.


The first proposal doesn't require you to buy a certificate, see: http://tools.ietf.org/html/draft-nottingham-http2-encryption...

With that, http will be encrypted with no certificate check, and https will still have the good ol' check.


The irony is that in that situation http with SSL and a randomly generated cert will be more secure than HTTPS using the CA's cert. Hell, I'd like HTTPS to use the CA's cert for identity but a self-signed cert for the actual data transfers.

CAs are a single point of failure for security.


Don't worry, you just don't understand how TLS works :-)

The CA never gets the private key. Instead they get a certificate signing request (CSR), which only contains the public key part. They sign that.

Oh, and then there is perfect forward secrecy, which basically means that even the server's private key is not the one used to encrypt the actual data (after the initial handshaking, and only for suitable cipher suites, subject to downgrade attacks).

Disclaimer: at least, that's how it's properly done. Some CAs offer a "send us your cert and we'll sign it", and dumb people who shouldn't be admins use it because it's (slightly) easier to use.

But you got the conclusion right, the notion of CAs is problematic.
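The proper flow described above, sketched with openssl (`example.com` and the file names are placeholders):

```shell
# 1. Generate a private key; it never leaves your server.
openssl genrsa -out example.key 2048

# 2. Create a CSR containing only the public key and the identity to certify.
openssl req -new -key example.key -out example.csr -subj "/CN=example.com"

# 3. Send example.csr (not example.key!) to the CA; verify it locally first.
openssl req -in example.csr -noout -verify
```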


"dumb people who shouldn't be admins"

This is what kills CA security. Any employer with over 5 people in the IT dept probably has someone who can insert a CD-ROM but has no idea how to set up CA and SSL stuff, installing internal intranet servers using https and a self-signed cert.

So we're carefully raising a whole generation of users programmed to accept any self-signed cert, after all "that's how the benefits website is at work" or "that's how the source code mgmt site is at work". Then they go home, and oddly enough their bank presents a new self-signed cert, or at least they think it's their bank, and just as they have to click through 10 times a day at work, they click through the same popup at home and then enter their username and password and ...

Paradoxically, as a budget weapon it's excellent, because you probably have good enough physical security at work and frankly it's usually not something worth protecting anyway, but it is incredibly annoying, so you can bring up at budget meetings that IT can't fix the SSL cert errors on some meaningless server because they can't afford it, etc. Not technically true, but J Random MBA, managing something he knows nothing about, can't figure it out, so it's a great budget weapon. Highly annoying but doesn't really hurt anything.

To fix this you'd need something like a standard union contract rule for enterprise programmers: never, ever ship enterprise software that allows a self-signed key. Good luck defining enterprise software, I suppose.

And in the spirit of idiot-proofing leading to better idiots, requiring no self-signed keys means idiots will create their own root and train users to import any root they ever see, anytime they see one. Then distribute a non-self-signed key signed by the imaginary "Innitech CA services" root. What could possibly go wrong with training users to do that?


For internal websites, be your own CA and distribute the cert via AD (or include it in your OS image, or whatever).


In the spirit of "idiot proofing leads to better idiots" of course that will not happen.

In fairness, if you have a heterogeneous network of legacy Windows, some Macs for real work, legacy BlackBerry, and both real smartphones, distributing it "everywhere" can get kinda hard.


Except that the CA, or a CA hacker, can impersonate you; thus it's still one of the multiple single points of failure.


Yes, but they can also do so if you use a self-signed certificate, by just self-signing their own. There's no way that's less secure than a CA-signed cert.


As far as I know, self-signed certs have to be approved on a case-by-case basis in most browsers. Thus if a site is hit by MITM, the cert will change and the browser will warn. Of course, that's assuming you've visited the site before and care to pay attention to the warning.


Besides geococcyxc's remark, how are you to know that the first certificate is legitimate? How are you to know that the new certificate after the old one has expired is legitimate?

If you want pinning, there are better solutions: http://patrol.psyced.org/


Care to elaborate? I do not think you will get a warning if the MITM is done with a certificate signed by a valid CA, even if you have approved some self-signed certificate before for that site. At least I have never seen this in any browser.


You'll be protected against NSA-style snoop-everything passive attacks.

CAs will always be able to MITM you. Like I said: "the notion of CAs is problematic."

There are two caveats:

1) certificate pinning: your browser has a hard-coded list of certificates for all major websites (e.g. Chromium: https://code.google.com/p/chromium/codesearch#chromium/src/n... (scroll down!))

2) there are add-ons (ie Certificate Patrol) that warn you when the certificate changes


One service I can recommend is https://www.startssl.com/ (no affiliation, just a customer) who offer free certs to individuals, and cheap certs to businesses (their prices on wildcard and multi-domain certs are the best I've seen online at $59).


As an individual, can I just say -- don't ever mess up, lose your key or need to regenerate your certificate before the expiry date.

Stay within their rules, or it will cost you $25 (because of "revocation costs").

StartCom are pretty awesome, but be aware of potential pitfalls.


I had to revoke my wildcard cert a few weeks ago. You can tell them why you did that. As far as I know, they decide whether to charge you on a case-by-case basis. When I revoked my cert I got an email 3 minutes later saying: "Revoked free of charge".


Not my experience at all.

To quote them:

"Class 1 certificates aren't revoked free because we receive too many requests daily (specially for the Class 1 free certs) and would we have revoked them all, our certificate revocation list (CRL) would have been blown out of every proportion."

In a further back-and-forth, the admin proceeded to tell me how much bandwidth I would cause them (I don't even care about being added to a CRL for a personal domain).

Edit: Sorry, you did say a wildcard cert, which sounds like a paid cert, so would offer more "service" I'm guessing.


Their verification service is annoyingly rigid. Anything other than a phone call to a number listed on a phone bill (and no fair blacking out other numbers on a family plan, for instance) or waiting a couple of weeks for a letter from Israel is rejected, even when the information is easily verified using online government databases[1].

1 - Not an NSA joke, more that "hey, voter registration and property tax rolls are public and online; you could just verify that, no?"


Here's a suggestion:

http:// is encrypted but performs NO certificate check.

https:// is encrypted but performs a certificate check.

Done.


Did you read the linked message?


I did, but didn't pick up that's what was meant. So the accidental tl;dr above was helpful.


As a client, it's easy to be your own CA. Then you can just obtain the remote server certificates you need and, if you trust those endpoints, sign them yourself.

The problem is how to get those certificates and be sure they are the right ones. The problem was already solved in the pre-Internet world: letterhead, signatures and postal mail.

Trustworthy commercial entities could really distribute their certificates in paper form (or maybe on a business card) as an official document. Customers then scan these into their computing devices and, if they choose, sign them.

I doubt that anyone is pushing HTTPS based on the authentication function. It is the need for encryption that is probably the impetus.


> the smaller sites currently not using SSL will just be stuck on HTTP 1.1 forever

Is it a problem?


Saying that you can only use this new technology if you are willing to hand $100 a year to some third-party provider that you really don't see the need for is a major hindrance to adoption.

Imagine if every formerly "free" technology (tcp/ip, email, http, c-compiler, whatever) demanded you pay $100 annually to use it. How many hackers creating things which we now take for granted do you think would have been discouraged from doing so?

Security is nice, but that doesn't mean it's worth it or required all over, at any cost. De-facto requiring a paid "license" to operate on the internet is not the right way to go.


What are you running your web servers on? How do you connect these web servers to the internet? How do you register your domain?

I would assume that you pay more than $100/y (which is expensive for a domain validated SSL cert, btw) unless you're using a free hosting provider at which point it's not your decision what protocol to use anyways.


I have a 10mbps upstream connection. I have a Raspberry Pi and a Linux-powered NAS. I have lots of equipment which costs me nothing to use, which I can use to host or create new internet services. And I do.

No, it won't let me run a multi-million-user site, but that's not my aim, nor should it be needed to let people new at the game fool around.

Putting the bar higher and higher to just being able to fool around is so utterly the wrong way to go.

I wonder if anyone on this site remembers what it was like to be 8 years old and already being able to write your first program on your TRS-80 or Commodore 64.

No money needing to be spent, no need to seek permission. Just hack. Get immediate, direct feedback. Instantly gratifying. That approach gave us a generation of computer professionals unlike any other. Why are we so eager to put up the roadblocks now?


> I have 10mbps upstream connection. I have a Raspberry pi and a Linux-powered NAS.

that means you pay for your internet connection and for the hardware you run the website on. Why is also paying for a certificate a problem?

I get the hobbyist approach, but especially for hobbyists I think it's better to stay with HTTP/1.1 which, as a plain-text protocol, is a lot easier to learn than the complicated ugly mess of HTTP/2.0. Also, because of the SSL requirement, development will probably never happen over HTTP/2.0 - or do you want to create or even purchase new certificates for all your development projects?

A HTTP/1.1 server is something normal people can implement.

A HTTP/2.0 server is something for others to implement and a pain to debug.

I see HTTP/2.0 as a new transport protocol to transmit what we know as HTTP/1.1. None of the request/response semantics has changed between 1.1 and 2.0 (minus server push, but if you want to support older clients, you'd have to use other techniques anyways).

If you're just running your own little page, nothing is stopping you from using HTTP/1.1. Once your site is big enough to actually benefit significantly from HTTP/2.0, you will have the money for a certificate.

It's the latency for your clients you can shrink with 2.0, but you'd get bigger benefits from moving off hosting on a cable modem than by moving to 2.0. At that point, you'll have other, bigger costs to pay than the certificate.


> that means you pay for your internet connection and for the hardware you run the website on. Why is also paying for a certificate a problem?

Because the internet access and the hardware would have been bought regardless of the activity, for other purposes. (S)he was already paying for them and using them for other things. Running a web site _happens_ to be one of their uses, but is not the one goal. The fact that it is free to run a web site means that I can run my website on my laptop.

On the other hand, the certificate would have to be bought _only_ for this, because that is its only use: being able to play the HTTP/2.0 game.


TCP stacks long ago became something for other people to implement and hard to debug. Same with most encryption. There will always be a hobbyist path with HTTP/1, but the biggest sites in the world are building HTTP/2 for their use cases.

The internet is no longer predominantly a hobbyists' playground and hasn't been for some time. Mainstream success leads to this sort of transformation by definition.


> Get immediate, direct feedback. Instantly gratifying. That approach gave us a generation of computer professionals unlike any other.

Are you kidding? Just a few decades ago you would have needed to pay a lot for machine time just to be able to use a computer. Today you can sit near a cafe with a $200 netbook and have free internet access. In the 90's a .com domain cost ~$100. Just a few years ago a TLS certificate cost close to a hundred dollars; today you can buy one for around $10 per year, or sometimes get it for free.

You have a 10mbps connection! At your home! How much do you pay for it?


Get over my connection. In some places it's quite common.

The thing is you are both missing the point:

The tech landscape is growing increasingly complex. We shouldn't be adding more obstacles to getting involved than we already have.

That's how horrible Legacy-things get built. We don't need to do that to our internet.


It's only a barrier to entry if you insist on using HTTP 2.0, which is a CHOICE you make. Don't use HTTP 2.0, just like you can choose to use your internet connection for whatever you like.


But according to this discussion over at reddit, HTTP 2 seems to be required for "real" sites:

http://www.reddit.com/r/technology/comments/1qj1tz/http_20_t...

So now it's NO choice after all. If you want to run a "real" site, not only must you pay rent for your DNS, you are now also being extorted into paying money to CAs. CAs which can be subverted by the NSA, so they're effectively worthless anyway.

That's a bad move. Internet should be getting cheaper, not more expensive.

This whole HTTP 2.0 affair is turning into a real piece of extremely short-sighted shenanigans. Given W3C's green-light on DRM in HTML, we should start questioning if we want to entrust them with these sort of tasks in the future. They have gone completely off the rails.


You misunderstood. Nobody forces you to use HTTP 2.0, you can use HTTP 1.1 or even 1.0.

And, again, HTTP 2.0 has nothing to do with CA prices.


>You have 10mbps connection! At your home! How much do you pay for it?

I hope you understand it's mbit. If so, I've got 50mbps over here and I pay €50 (that's about $67 USD) per month, which includes 50 TV channels and interactive TV (I'm Dutch, I hope I described it correctly)

The situation (s)he describes is fairly common; I've got a Raspberry Pi and an old laptop running as servers over here, on which I do experiments and host small websites. They've got StartSSL certs, which suck, but they do their job at least. If you put enough effort into the process and don't just blindly fill in the forms, you'll get there.


So, it's not a problem for you to use TLS with HTTP 1.1? What's the problem with HTTP 2.0 then?


People who are really interested in programming will find a way, they always have.

Did you think typing source code from a magazine (which they didn't carry in your home town because nobody else was interested) wasn't a roadblock or barrier?

Waiting to have access to the family TV so you could plug in your microcomputer? Or saving up to buy a second-hand 12" TV set so you could use it in the bedroom? Paying $1500 (in inflation-adjusted dollars) for a Commodore 64 was a pretty big barrier.

If none of those were barriers for you, then I'm sure your parents would have sprung the cash for an SSL certificate.


You can get free SSL certificates from StartSSL. There are also domain registrars who give you a certificate at no extra cost with the domains you buy, and other CAs that charge less than $10 a year.

And if none of those float your boat, you can always self-sign if you're willing to put up with the warning messages.

Your argument seems extremely hyperbolic.
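For what it's worth, the "encrypt but don't verify" mode that self-signing implies already exists at the API level. A minimal Python sketch (the host name in the usage comment is hypothetical):

```python
import socket
import ssl  # socket is only needed for the usage sketch below


def opportunistic_context() -> ssl.SSLContext:
    """TLS context that still encrypts the connection but skips the
    identity check, i.e. accepts self-signed certificates.
    A MITM remains possible: this only defeats passive sniffing."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # must be disabled before verify_mode
    ctx.verify_mode = ssl.CERT_NONE  # accept any (e.g. self-signed) cert
    return ctx


# Usage against a hypothetical self-signed host:
#   with socket.create_connection(("myhost.example", 443)) as sock:
#       with opportunistic_context().wrap_socket(sock) as tls:
#           print(tls.version())     # traffic is encrypted regardless
```

This is exactly what clicking through the browser warning amounts to: the transport is encrypted, but nothing vouches for who is on the other end.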


If you think of it as something similar to DNS, it's not so shocking. Right now we hand over cash to get a usable domain name, or ask some other entity to share space on their domain. Now you'll also pay for your SSL certificate, or ask some other entity to let you share a piece of theirs.


There's nothing in this technology that demands payment. You can distribute your own root certificate to clients, or create a free certification authority (or join http://www.cacert.org/), if you'd like.


Couldn't agree more. Besides the ~$100 annually, a CA certificate is harder to manage (considering cloud hosting etc.), and it is not worth the trouble for many casual sites.

I predict that HTTPS-only HTTP/2 is doomed.


I don't think so.

We are also "stuck" on IPv4 which hasn't been a problem.


Domain names will probably be sold in bundles with certificates, driving the price of SSL certificates lower.


SSL seems to conflate 2 ideas, proof of identity and an encryption layer. Would it be possible to have the encryption layer without requiring a third party to handle keys?


These ideas are inextricably linked! If you cannot verify the identity of the other end, you cannot detect a man in the middle who decrypts, monitors or alters the data and then passes it on to the real endpoint.
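The point can be shown with a toy (deliberately tiny and insecure) Diffie-Hellman exchange: encryption without identity verification cannot stop an active attacker, because Mallory simply runs the key exchange twice, once with each side. All names and parameters here are demo-only; real DH uses vetted groups.

```python
import secrets

p = 2**127 - 1        # a Mersenne prime, fine for a demo
g = 3


def dh_keypair():
    """Return a (private, public) pair: public = g^private mod p."""
    priv = secrets.randbelow(p - 2) + 1
    return priv, pow(g, priv, p)


a_priv, a_pub = dh_keypair()   # Alice
b_priv, b_pub = dh_keypair()   # Bob
m_priv, m_pub = dh_keypair()   # Mallory, sitting in the middle

# Mallory intercepts each public value and substitutes her own.
# Alice thinks she shares a key with Bob; it's really with Mallory:
assert pow(m_pub, a_priv, p) == pow(a_pub, m_priv, p)
# Bob likewise unknowingly shares a key with Mallory:
assert pow(m_pub, b_priv, p) == pow(b_pub, m_priv, p)
# Result: two perfectly "encrypted" links, both terminating at the attacker.
```

Nothing in the math fails; both links are genuinely encrypted. That is exactly why the identity check is what stands between "encrypted" and "secure".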


Yes, proposal (A) in the article suggests this.


Unless there's somewhere that will let me do domain-only validation, I'm not interested in it either. Sick of places leaking my information that has been "secured".


Have you tried StartSSL?


Yep, they ask for a bunch of information and have a human verify it. Ideally I just want domain-only verification (it used to exist, not so much now) that doesn't require any of my personal information.


They should just do what ssh does.
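The ssh model (trust-on-first-use) can be sketched for TLS in a few lines: pin the certificate fingerprint the first time you see a host, and raise the alarm if it ever changes. This is illustrative only; `fingerprint` and `check` are made-up helper names, and the pin store is just a dict standing in for ssh's known_hosts.

```python
import hashlib
import socket
import ssl


def fingerprint(host: str, port: int = 443) -> str:
    """Fetch the server's certificate and return its SHA-256 hex digest.
    No CA check here: the TOFU pin below replaces it."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()


def check(known: dict, host: str, seen: str) -> bool:
    """known maps host -> pinned fingerprint, like ssh's known_hosts."""
    if host not in known:
        known[host] = seen          # first use: trust and record
        return True
    return known[host] == seen      # changed fingerprint => alarm
```

The trade-off is the same as ssh's: the very first connection is unauthenticated, but every subsequent one detects a swap, and no third party is involved.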


Totally agree. While this sounds amazing, the process of maintaining certificates can be a nightmare, especially for someone not familiar with the security process. On the other hand, this does present a huge opportunity for a service that would basically manage your company's certificates.


I think complacency and thinking "they don't need it", even if it costs a few extra dollars a year, is the much bigger problem.

Some sites may stick with 1.1 for a while, but my guess is there will be a ton more sites that adopt HTTPS because of this.


Why is this the top comment, didn't anybody read the link? Come on, HN!

(hint: in most of the design options there wouldn't be CAs unless you used the https url scheme. See other messages in the linked thread too)


I hate to say it, but the W3C has been making quite a few dumb decisions lately.



