Hacker Times
The Hotdog web browser and browser engine (github.com/danfragoso)
314 points by pabs3 on April 20, 2021 | 144 comments


It's great to see yet another attempt at writing a web browser --- and I say this as someone who has been (slowly) working on one myself.

The browser is far from stable, spec-compliant, or even really useful, but I'm slowly working on adding more features and supporting more sites.

When the specs are constantly churning in order to keep one gigantic company's browser an effective monopoly, maybe it isn't really that important to follow them so closely... especially if you're aiming for something more like an actually-user-friendly (i.e. with the UI and controls you actually want, not some designer's flavour-of-the-month) hypertext document viewer than a web application runtime/OS. From that perspective, even HTML4+CSS2 would probably be quite sufficient.

Maybe if enough of these "minimal browsers" show up, people might even realise that basic HTML and CSS is quite sufficient for a lot of things and start creating simpler, more efficient sites, thus dissolving the monopoly. I realise it's going to be extremely difficult to fight corporate interests, but one can hope and dream...

https://qht.co/item?id=25915313


The idea of "minimal browsers" is a popular one and there are thriving communities based on this idea:

Gopher is a dead simple text-based protocol:

https://en.wikipedia.org/wiki/Gopher_(protocol)

Project Gemini is a slightly more powerful protocol for the small web:

https://gemini.circumlunar.space/

The tilde (~) community, which makes the small web social:

https://tilde.club/

I recommend you have a look at James Tomasino's Youtube channel, where he shows a bunch of cool stuff going on around Gemini, Gopher and tilde:

https://www.youtube.com/watch?v=DoEI6VzybDk

This is not for everyone, but if what you want is just basic text and functionality that can work from a terminal (though there are GUI browsers too), then this is perfect.


Neither Gopher nor Gemini is remotely close to being a replacement for the web, as neither supports anything close to rich text or media.

Gemini doesn't allow for embedding images in web pages - which makes it vastly inferior to the web for any kind of interesting documents.

Imagine reading a research paper where, in order to view figures and equations, you had to follow a link to a separate object. No font control. No two-column layout. No anchors to allow you to jump to sections of the document. No metadata to inform you of the authors.

Gemini and Gemtext actively inhibit learning and knowledge dissemination by obsessing over pure plain text, which is bad at those things.


> Gemini doesn't allow for embedding images in web pages - which makes it vastly inferior to the web for any kind of interesting documents.

This statement is incorrect. Gemini clients can absolutely display inline images.

The difference is that default behavior is to require a user action to load a resource. An image can be a link, but when a user clicks that link it can turn into an inline image. This is how clients like Lagrange work. In other words, inline images can have delayed loading.
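This click-to-load behavior falls naturally out of how gemtext works: links are their own line type, so a client can classify lines up front and decide per-link whether to fetch eagerly. A rough sketch in Python (the function and heuristics here, like matching file extensions, are my own illustration, not spec text or any client's actual code):

```python
# Sketch of how a Gemini client might classify gemtext lines, flagging
# image links so the UI can offer click-to-load inline display instead
# of fetching eagerly. Heuristics (extension matching, label fallback)
# are illustrative assumptions, not taken from the Gemini spec.
IMAGE_EXTS = ('.png', '.jpg', '.jpeg', '.gif', '.webp')

def classify(line):
    if line.startswith('=>'):                      # gemtext link line
        parts = line[2:].strip().split(maxsplit=1)
        url = parts[0] if parts else ''
        label = parts[1] if len(parts) > 1 else url
        if url.lower().endswith(IMAGE_EXTS):
            return ('image-link', url, label)      # render as link; inline on click
        return ('link', url, label)
    if line.startswith('#'):                       # heading line
        return ('heading', line.lstrip('#').strip())
    return ('text', line)
```

A client like Lagrange effectively does something along these lines: an `image-link` is rendered as its label text until the user activates it, at which point the image loads in place.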

If a user understands the consequences (tracking, network usage) of doing so, this behavior can be changed to load images by default; however, authors should not expect users to do this and should write their documents accordingly.

Personally, I prefer a document to not have inline images; my gemini client opens images in my default image viewer instead. My window manager makes my image viewer float above other windows by default. This way, images "pop out" into a separate window that I can keep viewing as I scroll down in a document; I never have to scroll up to look at the last image.

> Imagine reading a research paper where, in order to view figures and equations, you had to follow a link to a separate object. No font control. No two-column layout. No anchors to allow you to jump to sections of the document.

These are all client-side features. Half the point of Gemini is for the user agent to determine presentation and leave semantic markup to authors. I don't want weird fonts or multi-column views, but you do; Gemini lets us both get what we want instead of having everyone see a one-size-fits-all presentation. Clients like Kristall even give you a TOC in the sidebar.

> Gemini and Gemtext actively inhibit learning and knowledge dissemination by obsessing over pure plain text, which is bad at those things.

Text is the only form of communication that can be understood by the sighted, blind, deaf, and machine (translation, etc) while being stored and transmitted without information loss. Text is good at knowledge dissemination.


> This statement is incorrect. Gemini clients can absolutely display inline images.

"clients" and "can" - it's not mandated by the spec, therefore, "Gemini" does not do it.

> The difference is that default behavior is to require a user action to load a resource.

Extremely non-conducive to thought. Again, take the example of a research paper - the difference between having every figure and formula appear by default and having to click-to-load is massive, with the latter being un-ergonomic and inhibiting comprehension and flow.

> These are all client-side features. Half the point of Gemini is for the user agent to determine presentation and leave semantic markup to authors. I don't want weird fonts or multi-column views, but you do; Gemini lets us both get what we want instead of having everyone see a one-size-fits-all presentation. Clients like Kristall even give you a TOC in the sidebar.

You can do exactly this same thing with the modern web with CSS styling and userscripts - the difference being that the web gives you saner defaults that are more conducive to thought, and Gemini clients seem to give you less-sane defaults that are less conducive to thought.

> blind, deaf

This is a limitation of being blind or deaf - someone who's blind wouldn't be able to view a sunset in real life. Obviously, though, while text can be read/listened to by someone who's blind or deaf, that doesn't make text a replacement for images, formulas, or interactive animations - people with those disabilities simply can't perceive the native forms of those things. Several hundred or thousand words describing a layout for a PCB is not equivalent to an image of the layout.

...and, modern webtech has accessibility properties that allow for annotation of non-text media with text. Gemini? Does not.

> machine (translation, etc)

False. Machines cannot understand plain text - it must be parsed. English (and other natural languages) are not machine-parseable, and machine-readable plain text has no reason not to exist as structured data in the first place.

> while being stored and transmitted without information loss.

All of the other kinds of electronic data that exist in the modern web can also be stored and transmitted without information loss, so this is not a special property.

> Text is good at knowledge dissemination.

Relative to text+formulas+images+interactive visualizations? Absolutely false.

Show me how to write out all of the variants of the Schrödinger Equation[1] in plain text, while still making it as readable, understandable, and useful as the mathematical formulas.

Show me how to phrase, in words, a 3D circuit layout, such that it's easier to understand and manipulate than an interactive model.

Show me how to describe the sound of a violin.

Webtech gives you text and images and sound and formulas and interactivity. Gemini gives you text, and that's it. Having to click a separate link to go to a separate object makes it not "part of Gemini" and the user experience is clearly, massively worse.

[1] https://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation


> "clients" and "can" - it's not mandated by the spec, therefore, "Gemini" does not do it.

Prohibiting clients from loading inline images is not mandated by the spec either, so Gemini doesn't prevent it. Loading images inline is fine, as long as it's triggered by a user action. A core idea of Gemini is user control: network requests shouldn't happen without user consent just as presentation should be determined by the user agent.

Non-spec-compliant behavior is also fine if it's explicitly enabled by a user; the default should be spec-compliant.

> You can do exactly this same thing with the modern web with CSS styling and userscripts - the difference being that the web gives you saner defaults that are more conducive to thought, and Gemini clients seem to give you less-sane defaults that are less conducive to thought.

Try changing your browser's default background color and you'll end up seeing a bunch of pages with black text on a gray background. Change your browser's default text layout to two columns and see how many sites still work. The featureset of the web encourages authors to use those features, which begets complexity; complexity begets fragility.

Also, I'm not sure what you mean by "sane defaults"; The "default" HTML presentation is raw markup, and isn't exactly readable. The "default" Gemtext presentation is perfectly readable; in fact, all but two of the blog posts on seirdy.one were initially drafted in raw gemtext rather than markdown. Perhaps you were referring to the default stylesheets of the major browser engines. This is client behavior, and should be compared with existing Gemini clients that focus on presentation as well.

The web allows authors to dictate presentation and deliver content with visual branding; Gemini prevents this to make the focus on content rather than form.

> ...and, modern webtech has accessibility properties that allow for annotation of non-text media with text. Gemini? Does not.

Like the Web and Gopher, Gemini links have display-text. Image links are the same. I consume Gemtext with a screenreader quite regularly, and image consumption is much less painful than it is on the Web. Knowing that users will see text before an image encourages Gemini authors to use good alt-text and to only include images when they convey necessary information that text cannot. Superfluous images are virtually non-existent.

> Machines cannot understand plain text - it must be parsed. English (and other natural languages) are not machine-parseable, and machine-readable plain text had no reason to not exist as structured data in the first place.

That wasn't my point; my point was that text can be parsed and processed by machines much better than other forms of information, improving information dissemination.

> All of the other kinds of electronic data that exist in the modern web can also be stored and transmitted without information loss, so this is not a special property.

Unless you want to load a bunch of 5 MB images, you're going to need resizing and lossy compression. https://xkcd.com/1683/

> Show me how to write out all of the variants of the Schrödinger Equation[1] in plain text, while still making it as readable, understandable, and useful as the mathematical formulas.

I admit that Gemini isn't great at mathematical formulae. Some people are working on Gemini clients that can understand LaTeX code fences.
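For what it's worth, something like the time-dependent Schrödinger equation is compact enough to ship as raw LaTeX inside a gemtext preformatted block; a client that understands LaTeX fences could render it directly:

```latex
i\hbar \frac{\partial}{\partial t}\,\Psi(\mathbf{r},t) = \hat{H}\,\Psi(\mathbf{r},t)
```

Clients without LaTeX support would still show the source, which is at least unambiguous, if not pretty.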

> Show me how to describe the sound of a violin.

Include a link to an audio file so it plays when the user wants it to. Several clients can play inline audio and video.

---

Gemini isn't for everyone and everything, and that's kind of the point. It certainly doesn't seem like something meant for you, since you seem to be focused on research papers and apps. It's not trying to replace your OS, it's trying to be a part of it. Gemini also doesn't intend to replace the Web; it intends to focus on structured hypertext. The web became a steaming mess because of the feature overload you've described; the solution is to focus on being able to do fewer things and to restrict what's possible to prevent the same thing from happening.

I don't think we'll see eye-to-eye on this, because this looks like a value-based discussion on quality versus quantity to me when I don't think one is trying to replace the other. Alternatives aren't replacements; Gemini is an alternative, not a replacement.


> Gopher is a dead simple text-based protocol

Shameless plug for my own Gopher Client:

http://www.mattowen.co.uk/gopher/gopher-client-browser-for-w...


> The idea of "minimal browsers" is a popular one

RSS in a terminal saves time and energy.


Shameless plug for my minimal web browser: https://rhapsode.adrian.geek.nz/

A voice UX was surprisingly easy to implement (given a good platform to build on); newer standards just get in the way of the experience!

I'll tackle a visual one soonish...


I learned about Gopher at the same time I learned about the World Wide Web, in a kids book about Cyberspace in 1996 or so (I was a six year old!).

It’s been fun revisiting it as an adult!


Is there a modern version of lynx/links using an engine like WebKit? I always thought this would be useful but must admit I never got around to writing it.


Browsh[0] might come close to what you’re looking for. It’s not strictly designed to use less memory or be faster - it's better for bringing up a browser on a remote system, as it still uses a headless instance of Firefox behind the scenes.

[0]: https://www.brow.sh/


It depends on what you mean. Do you mean a command-line-accessible version? If so, there are several that work in terminals. Do you mean text only? If so, there are several browsers that can do this for you :-)


Care to provide some working examples? Many that can be found on Google appear to be working prototypes that are not actually usable.


Depends on your idea of "usable" of course:), but w3m?


> When the specs are constantly churning in order to keep one gigantic company's browser an effective monopoly, maybe it isn't really that important to follow them so closely...

The notion breaks as soon as your users want to use youtube or gmail, which for some mysterious reason keep insisting on using all these useless "standards" even though they have no benefit to the user.


Youtube and gmail will mysteriously break for people on Firefox too, so it's not that much worse.

/sarcasm (sort of...)


Regular Firefox user here with a YouTube Premium account. I use it all the time and have never had any issues.


Just a couple weeks ago, you couldn't use the spacebar in YouTube search in Firefox only.


Pretty sure those sites break in Edge and Brave as well...


They break them on purpose so people will switch to Chrome. And the worst part is, there's no evidence (yet?) that there's malice behind it. Maybe we'll get a whistleblower at some point, but that'll only happen when their income no longer covers buying off their conscience.


What exactly is broken in Firefox on YouTube or Gmail? I'm not a heavy Firefox user, but when I try it every once in a while, I've never noticed anything broken on any of the Google's sites, even Maps and Docs worked just fine - but then, maybe I wasn't trying out some advanced features.


> What exactly is broken in Firefox on YouTube or Gmail?

Performance. That makes the sabotage so insidious!

https://tech.co/news/google-slowed-youtube-firefox-edge-2019...

https://addons.mozilla.org/addon/disable-polymer-youtube


My wife and I have had a fanless PC hooked up to our TV since 2011.

Firefox is the only browser we use and we watch Youtube every day. If it broke, I'd know about it or hear about it.

It never breaks.


Do they only break for users in the United States? I've been using Firefox exclusively since about 2.0, and I've never had any of the Google projects break for me. Not once. Yet this seems to be a pretty common problem around here.


I'm from Europe, and on one occasion YouTube was broken on Firefox but not on Chrome. It was fixed within a day or two.


In my experience, YouTube in-video links (those that e.g. show thumbnails of videos and are pointed at by vloggers with their fingers) have been broken in Firefox for years. Not that I complain, I've always been annoyed by those. I think it took me 2 years to realize that after seeing vloggers point at invisible things. I had to double-check in Chrome. (I just assumed that they forgot to add the links when editing.)


Those links don't appear on my Android-based Youtube clients that come with my cable box and Sony TV. The vlogger's hand points to nothing. It's amazing how second-class Youtube on Android TV is, maybe someone at Google hates testing it. I still manage to use it for about 75% of my Youtube usage. My wife and I are definitely going through a Youtube phase, which will probably end when the ads reach a tipping point.


Incompetence vs malice...

If they are close to the edge cases of the spec, just not testing on Firefox will break it. By negligence, rather than malice.


For YouTube, use youtube-dl or VLC (it's capable of playing YouTube videos directly).

For gmail, use an IMAP+SMTP client of your liking, of which there's no shortage.


Or there's https://invidio.us/ !


This notion also breaks when a user wants to use government and financial (bank, etc.) websites.


Depends on the government; in Europe they tend to follow usability guidelines.


True, a browser is now necessary in these cases. However, my government and banks have strong incentives to make their services accessible and not break usability.


> as your users want to use youtube or gmail

Yes, as long as a requirement is something those corps control, they'll essentially control you.

The answer is to ignore support for gmail and youtube and hope their reputation catches up to them - specifically youtube alienating creators and becoming yet another corporate on-demand television-clone.


mail.google.com and YouTube.com are just frontends.

There are alternative and better ways to access gmail and YouTube.

I just found this one for youtube and twitter for example: https://www.reddit.com/r/privacytoolsIO/comments/jlkqxa/any_...


I feel ya. I'm refactoring my CSS parser the third time now, because I needed to support nested media queries (and therefore the logical condition "spec").

It's amazing how complex even CSS has gotten. And implementing HTML without the ISO SGML spec and the SGML handbook is close to impossible.

So many edge cases, so much layout overhead, and so many damned flow root types.

Honestly in the beginning I thought "how hard can it be, it's just like XML"... I was so wrong about this.
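To make the nested-media-query pain concrete, here's a toy sketch in Python (the helper names are mine) of the recursive block splitting that the conditional-rules spec forces on a parser. A real tokenizer must also handle strings, comments, and escapes, all of which can contain unbalanced braces, so this is illustrative only:

```python
# Toy sketch: recursive splitting of nested @media blocks, the feature
# that stops a CSS parser from assuming one flat level of rules.
# Not spec-compliant -- ignores strings, comments, and escapes.
def split_blocks(css):
    """Yield (prelude, body) for each top-level `prelude { body }` block."""
    depth, body_start, prelude_start = 0, 0, 0
    for i, ch in enumerate(css):
        if ch == '{':
            if depth == 0:
                body_start = i + 1
            depth += 1
        elif ch == '}':
            depth -= 1
            if depth == 0:
                yield css[prelude_start:body_start - 1].strip(), css[body_start:i]
                prelude_start = i + 1

def outline(css, level=0, out=None):
    """Indented outline of rule preludes, recursing into nested @media."""
    if out is None:
        out = []
    for prelude, body in split_blocks(css):
        out.append('  ' * level + prelude)
        if prelude.startswith('@media'):
            outline(body, level + 1, out)  # conditional rules may nest
    return out
```

For example, `outline("@media screen { @media (min-width: 600px) { p { color: red } } }")` produces a three-level outline, which is exactly the shape a one-pass flat-rule parser can't represent.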


This is exactly it. I looked at doing something like this, ok, I looked at doing this and decided early to forgo CSS because it's so very very complicated.

Truthfully, if my skillset had better matched the task I might have been lured into giving it a go. On another project I considered using CSS as a styling technology and got down to really understand it in detail. I then realized how good my 1,000 yard vision really is.

I can't speak to the forces which inevitably lead to decay but essentially, for some reason every technology tries to eat the world.

CSS is trying to eat the world and HTML5 is trying to eat the world and of course Javascript is famously trying to eat the world.

All these technologies (and here's the part where I see my post begin to fade to gray) are on their way to experiencing technology's version of societal collapse.

I use that analogy because it's so, so apropos.

They are overwrought, overly complex systems yielding only marginally better results and being maintained at huge cost in terms of attention, brain power and collateral damage by everyone. The benefits accrue to a smaller and smaller number of people (FAANG et al.) who do not have society's best interest at heart at all and are very far from the founding principles which inspired the original vision.

A simple scriptless HTML 1.0 browser minus the blink tag would deliver at least to me nearly 100% of the benefit I get from the web which can be characterized as "seeing what is happening, seeing what other people think, learning new stuff and downloading stuff".

I would love to start a (reactionary) movement away from the current web composed of a privacy-preserving HTML 1.0 browser capable of HTTPS and people dedicated to creating pages and resources for it. I don't know of any such "movement" .

If anyone is aware of anything like this do share.


> If anyone is aware of anything like this do share.

Well, I actually tried to start a movement for that [1].

The idea is to offload as much as possible to trusted peers, and to refine the web with a trust model where the user has to trust a website specifically to deliver expected things from the user's side (e.g. a news website should have no right to shove videos down your throat).

I also think that a lot of web browsers tackle the privacy problem wrong. "User Privacy" is not sending a user-agent to a server, or downloading a resource from it in a statistically easily detectable manner.

Real privacy is not having to download anything from the web server at all, by offloading requests to its peers. In my Browser [2] I'm trying to have all metadata, configuration and observations (and extractions) federated. I believe that the real strength of peer-to-peer is not decentralization; it is federation and liberation.

[1] https://tholian.network

[2] https://github.com/tholian-network/stealth


Agreed.

Except (half kidding but only half kidding here) augmented for things like cat videos; previous generations watched tv for entertainment, and for many people that has been partially or entirely replaced by the internet.


Eh, the SGML spec is entirely unnecessary. Use the HTML5 parsing spec ("The HTML syntax" chapter).


I've been arguing this for a while: I want a browser that deliberately cannot run random scripts; only from a vetted source.

That source will contain stuff like autocomplete and htmx or similar.

If something doesn't work with that it asks you if you want to enable "bloat mode - warning: unsafe" and I the user chooses it enables a full browser engine.


uBlock Origin with JavaScript disabled by default comes close and I'm currently using it like this.

You can temporarily re-enable scripts for a website, or "lock" the reactivation for websites requiring Javascript and you need to use regularly.

My computer is once again cool, fast and quiet.


NoScript goes a pretty long way to being what you want, while still being usable. You can whitelist scripts per-domain, so you can still block the garbage while making the site functional.


LibreJS is probably what you're looking for.


A good suggestion, but LibreJS is about blocking non-trivial JavaScript which isn't Free Software, it's not about vetting specific JavaScript files. A malicious website can declare its JavaScript to be Free Software, and LibreJS will then permit that script to run.

* https://www.gnu.org/software/librejs/

* https://en.wikipedia.org/wiki/GNU_LibreJS


> Maybe if enough of these "minimal browsers" show up, people might even realise that basic HTML and CSS is quite sufficient for a lot of things and start creating simpler, more efficient sites, thus dissolving the monopoly.

I feel like some initiative to establish a super 'light' version of the HTML/CSS specs might be a very good thing... browsers have drifted so far out of the hands of individual coders, or even small teams, because of their complexity.


This is something I had been pondering. Rather than making a new 'standard' based on a new technology and thus expanding the standards base with complexity, we can go the other way.

Make a specification based on old, tried and tested tech. Thinking something like a set of HTML/CSS specs that can be implemented fairly easily. It is a spec that a website can be built to, knowing that end browsers/users can be expected to render it. It doesn't need anything new to be added into current browsers, but it is simple enough that others can build their own browser too.

A vague standard I figured would be that a single person should be able to implement the full spec from the ground up in about one year's full-time work. If done in a group in a free/open manner it could theoretically be done quicker. That said, there is the old joke: two programmers can do in two months what one programmer can do in one month.


> a set HTML/CSS specs that can be implemented fairly easily

That could then be heavily optimized and used for cross-platform apps instead of the bloated, kitchen sink approach of Electron.


Exactly. It allows a lower bar for entry for new players. Thinking of things like KaiOS.


https://www.w3.org/TR/xhtml-basic/

Though I think you're restricted in feature selection more by reality than by specifications. CSS is designed with the assumption of designer competence, which modern webdev struggles to achieve; a better strategy is to replace CSS with semantic markup and in this way integrate with user styles.


Piggybacking on this: there's a lot of redundancy in the Web specs, so it may actually be fruitful to start with features which are more expressive/consistent and leave the older ones to polyfills.

For example, if we want JS then it might be worth tackling that first, and bootstrapping the rest (e.g. rendering to one big canvas to begin with).


Isn't that what Google AMP was meant to be?


No; AMP includes the whole CSS spec with a few exceptions: https://amp.dev/documentation/guides-and-tutorials/learn/spe...


Google amp is not designed to be simple to implement to my knowledge, just to be fast to load and to tie you to google services.


The way I see it is that for many people web applications have replaced their need for desktop applications while giving a somewhat comparable experience. I think it would be a very hard sell for the average user to want to use a "minimal browser" because it could mean giving up a lot of day to day functionality they rely on.


It doesn't have to be "one browser to view them all" - take a simple, robust one for the random web and a bloated one for trusted applications.


My dream browser would have two engines -- a simple, very fast engine for "document-like" web pages and a more complex engine that can be loaded on demand using a UI similar to NoScript.


Do normal document like web pages render slowly in modern browsers for you?


Some disclaimers: I am not an expert on Chromium or even web development and this is all quite ignorant and subjective.

But I have noticed that plain text is sometimes very slow to render in Chromium browsers (Chrome, Edge, Vivaldi) under heavy load. The issue may be related to a process-per-browser-tab architecture: a bloated and fragmented block of memory attached to a tab/process can’t be easily freed / allocated. So if you’re on a tab that was previously loading lots of stateful JS, then switch to plain text, Chromium might get stuck in memory management for the tab instead of short-circuiting its architecture by allocating memory solely for the page.

I am not at all an expert here and don’t know how Chromium works under the hood. I don’t think it’s literally one-process-per-tab, I just have a vague sketch of what the problem might be here. But I think “idiots like nicklecompte have 700 tabs open and complain that tab 361 doesn’t load plain text quickly” is a problem that’s very difficult to solve in general, even if a dedicated “simple” engine might offer a lot of case-specific fixes.


IME: most pages that are poorly controlled enough to include "tag managers" are unusable with javascript enabled.


Honestly, I have been toying with the idea of building a web browser project that has no JavaScript: it just completely ignores the script tag and any JavaScript-related things, like onclick and other attributes.
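The filtering half of that idea is small enough to sketch. Here's a rough cut in Python using only the stdlib html.parser; the rules (drop script elements wholesale, drop any attribute whose name starts with "on") are my own first approximation, not taken from any existing browser, and things like javascript: URLs and noscript handling are deliberately out of scope:

```python
# Rough sketch of a "no JavaScript" HTML filter using only the stdlib.
# Rules are a first cut: drop <script> elements entirely, and drop any
# attribute starting with "on" (onclick, onload, ...). javascript: URLs,
# <noscript>, and void-element edge cases are not handled here.
from html.parser import HTMLParser

class ScriptStripper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []          # re-emitted markup fragments
        self.in_script = 0     # depth counter for <script> nesting

    def handle_starttag(self, tag, attrs):
        if tag == 'script':
            self.in_script += 1
            return
        kept = [(k, v) for k, v in attrs if not k.startswith('on')]
        attr_s = ''.join(f' {k}="{v or ""}"' for k, v in kept)
        self.out.append(f'<{tag}{attr_s}>')

    def handle_endtag(self, tag):
        if tag == 'script':
            self.in_script = max(0, self.in_script - 1)
            return
        self.out.append(f'</{tag}>')

    def handle_data(self, data):
        if not self.in_script:  # swallow script bodies
            self.out.append(data)

def strip_js(html: str) -> str:
    parser = ScriptStripper()
    parser.feed(html)
    return ''.join(parser.out)
```

A real browser would do this at the DOM level rather than by re-serializing text, but the sketch shows how little machinery the core idea needs.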


Really, what did JavaScript bring us? (And there go my points!)

I want to search for information and share information with people.

I don't want any more from the net. It feels like what we want is being surpassed by those who want to monetise us.


Lightweight sandboxed applications in the browser. No admin password needed since there's nothing to install.

Sure it may be bloated overkill for something that could have just been a plain text file, but it is nice to have real time and interactive visualizations.

Plus it is nice to have the choice to use mobile websites instead of using the access hungry native apps.


Figma, docs, sheets. All great apps I can use from any device with a web browser. No install needed, no updates needed. Easy to collaborate with. No "install this application", just "open this URL".

Application delivery through the web browser is just hella convenient. It's really hard to give that up for some "native apps and static documents" utopian dream.


The idea of "web applications" is almost as old as the Web itself. Java Web Applets, Flash, Silverlight,... were all attempts to bring application functionality to the browser. Billions have been invested in this strategy.

Why?

Because desktop computing used to be a battleground for commercial vendors in the late 20th century, and the goal was establishing market dominance. Being able to control who can run what on a platform was / still is part and parcel of establishing that goal.

Web browsers changed the game. They are a threat and an opportunity at the same time. A threat because they gave anyone a chance to escape from a native context and run whatever you want in a browser regardless of the platform you're on. No more having to compile and distribute the same application for a dozen potential targets.

Microsoft was so adamant about having Internet Explorer bundled with their OS in order to establish control over the future evolution of web applications on the information highway. And they got famously burned for it in that 1999 anti-trust case.

Application delivery as you know it today is convenient, but that came at a price. Vast amounts of resources have been poured into Chromium over the past two decades to bring that experience to billions. And it didn't happen out of sheer altruism on the part of Google.


The fact that web applications are so old indicates the demand for such a delivery platform.

> Microsoft was so adamant on having Explorer bundled with their OS in order to establish control over the future evolution of web applications on the information highway. And they got famously burned for it in that 1999 anti-trust case.

Totally. Microsoft was using it to try control the web as a Microsoft platform. Hence their push for ActiveX over flash/applets/javascript.

> Vast amounts of resources have been poured into Chromium over the past two decades to bring that experience to billions. And it didn't happen out of sheer altruism on the part of Google.

At the time Chrome was started it was a more or less altruistic move from Google, from the user's perspective at least. Google was heavily reliant on the web for income, and existing browsers were slow, had widely varying standards support, and lots of security issues. Chrome forced their hand by showing that a web browser can be fast and "secure."

Also, at the time Google didn't have a significant platform of their own. They would have been at the mercy of the platform gatekeepers. So pushing an open platform that anyone can publish on was in their own interest.

Since then Android has taken off and Chrome has morphed into arguably spyware, but at its inception it was a good thing for users.

> Application delivery as you know it today is convenient, but that came at a price.

A price to whom though. To those who would try to lock down our platforms and seek rent over application delivery? I guess I don't really care about how much it costs them ;).


You can disable JavaScript by default for some time and then look at the list of websites and apps where you enabled it again. It has been a very frustrating experience for me.


It's quite maddening. The only thing I've done with JavaScript is tracking customers, even if they aren't logged in.

Now I'm a very unique slice, I only take quick contracts from upwork and such as I don't have the pedigree to get a proper job in coding.


Your frustrating experience was before you disabled javascript or after?


After. Almost everything is broken. Keeping JavaScript enabled with a privacy/ads blocker is better for me.


I use Firefox with uMatrix to basically block any JavaScript that I don't explicitly enable. Twenty years ago, browsing the internet without JavaScript was fairly easy. But these days, it seems like upwards of 60% of websites simply don't work at all without JavaScript.

I wonder if there are any projects that help people find sites that don't use JavaScript. Search engines that only index JavaScript-free sites, old school "blog rings" that only link JavaScript free sites together.


You will get nowhere, the web is unusable without JS. Try running Firefox with JS disabled sometime.

You can use a plugin like NoScript to control what scripts run, it's a pretty decent compromise.


> someone who has been (slowly) working on one myself.

Well... that's kind of exciting! Is your work public? Would love to see it.

> maybe it isn't really that important to follow them so closely... especially if you're aiming for something more like an actually-user-friendly (i.e. with the UI and controls you actually want, not some designer's flavour-of-the-month) hypertext document viewer than a web application runtime/OS. From that perspective, even HTML4+CSS2 would probably be quite sufficient.

Yes. So much yes.


If you choose not to comply with the specs and stay in experimental land, why not reinvent the world wide web from the ground up? We don't need to use the bloated HTML / CSS / JavaScript / HTTP stack; designing brand-new protocols, reimagining how people connect through the internet, and building clients and servers for them is super fun.


Come on, everybody, this deserves a huge round of applause, for encouragement and for the effort. We need a lot more projects of this kind; we need some options for rebooting the browser scene from down below, however limited in scale and scope and functionality. I stick to my Firefox, crippled and bloated as it is these days, but I know full well that sooner or later it's going to fold, and I shall be stranded inside the Google nightmare. NetSurf shouldn't have to shoulder the complete oppositional burden all by itself.


There's Dillo too, but unfortunately last time I checked neither of them were even as close to usable on most sites as Opera 9.x (which was already many years old at the time, but had been quite popular for a while before that.)


And furthermore, when are we getting open source Presto? I know Opera stopped developing it around 2013, but there's still probably plenty a FOSS browser could salvage.

(I hear the source is floating around somewhere, but without a free license it's (unfortunately) likely to attract problems.)


I was really hoping that this was some sort of reboot of the HotDog[0] HTML editor by Sausage Software[1].

[0] https://archive.org/details/tucows_194462_HotDog_Professiona...

[1] https://en.wikipedia.org/wiki/Sausage_Software


I remember that they made a special version just for me. An update had removed a feature I used a lot.

A while later I helped them track down an error they couldn't find the cause of. They asked how they could thank me and I replied "bring back feature x".

For a little while they put up a link to a "Tony-edition" on the official download site until they got the feature back in the regular release. I still have it somewhere.

Hotdog was my first HTML editor. Brilliant it was. Happy memories.



Same. Hotdog was the first html editor I ever used as a kid and got me interested in creating on the internet. It felt revolutionary to me.


I was anticipating clicking on it hoping for the source code - because at the time I was writing an HTML editor myself (as a young teen) and was blown away by some of what HotDog could do. Would love to peer into some of those older projects.


I had hoped this as well. I spent so much time in that program building web sites long ago. We have a plethora of excellent dev oriented editors today, but a purpose built one can be so much better by simplifying workflow.


Gotta be a nod, right?


Written in Go. This is presumably good for security.

Curious that the components are named ketchup, mayo, mustard, sauce, bun, and gg. There's an obvious omission here, although it has the advantage of being vegan friendly.

Also, somewhat related: it's a pity the Servo project is going nowhere. I don't mean to put this project down, but Servo was the only realistic shot at a truly new, truly usable Free Software browser.


Servo isn't even close to being production-ready. Everyone seems to make Servo out to be something it wasn't. Anyway, the Linux Foundation picked up the project, so your dream may come true some day.

Servo was primarily a test bed for Firefox and all the components that they wanted to get into Firefox eventually made it.


Sweet! I’ve also been working on my own (toy) browser from scratch following https://browser.engineering - would definitely recommend to anyone who wants to get started on their own, granted it’s still a work in progress.


This is a fun learning project, but I don't think it's really a browser except in name. The code seems to implement the behavior someone thinks a browser might have, not what the specs actually say.

Ex. https://github.com/danfragoso/thdwb/blob/655eac96e4faa141cb4...

That's not an HTML parser. It's just some code that regexes strings looking for brackets.
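To make the point concrete, here's a small sketch (my own illustration, not the project's actual code) of why bracket-scanning with a regex falls apart. A `>` inside a quoted attribute value is perfectly legal HTML; a spec-compliant tokenizer tracks quoting state, but the regex does not:

```go
package main

import (
	"fmt"
	"regexp"
)

// firstTag is the naive "parser": grab everything between the first
// '<' and the next '>'. This is roughly what bracket-scanning code does.
func firstTag(input string) string {
	return regexp.MustCompile(`<[^>]*>`).FindString(input)
}

func main() {
	// The attribute value contains a '>', which is valid HTML.
	input := `<p title="a>b">hi</p>`
	fmt.Println(firstTag(input))
	// Prints `<p title="a>` -- the start tag is cut off mid-attribute.
}
```

The HTML5 spec defines the tokenizer as an explicit state machine (data state, tag name state, attribute value states, etc.) precisely so cases like this are unambiguous across engines.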

Projects like this are good ways to learn and have fun, but they're many years away from being a browser, even if we limit the scope to the specs of say 2012.

Also fwiw, if you're implementing a browser use the web platform tests instead of writing your own:

https://github.com/web-platform-tests/wpt


> ketchup (html parser and DOM Tree builder) mayo (css parser and Render Tree builder) mustard (UI Toolkit, events and OpenGL) sauce (requests, cache and filesystem) bun (css layout calculator)

Off topic, but I've encountered some more serious projects that used strange nouns to name their components. I've never figured out why, but it usually results in me spending more time cross-checking what each component does. Can anyone comment?


Because it's fun and if you work regularly with the codebase it makes no difference. Organizations adopt terms that aren't strict descriptors of the things they refer to all the time, stuff like the Johnny.Decimal system, someone will refer to "form 4.54" or "being compliant with 13485" and everyone will know what it refers to even though it's a perfectly opaque name to outsiders.


Engineers think they are being cute when they do this. In reality it hampers new devs because you can't just look at a component and have an idea of what it's doing. I've been on projects like this and after a year I was still second-guessing myself.


I used to do this. Now I use 'boring' names. My funny joke is not very funny anymore two years on (even then it was only slightly funny). Then you end up with a lookup chart for what does what, whereas if I had just used 'normal boring' names in the first place I would not be trying to remember what object 'funnyname' does vs 'movedatatodb'.


Love the food-centric naming conventions for all the components. Can we take a moment to appreciate themed naming conventions in software ecosystems?

It’s a great conversation starter, and it’s something that will keep us smiling even when things get serious.


I upvoted not because I agreed (I found the naming conventions distracting and inconsistent) but because why the fuck was your perfectly innocuous comment downvoted?


I'm not sure, but it made me sad. I don't think we are supposed to talk about downvotes, so I just grinned and bore it.

Thanks for offsetting that vote for me.


We aren't supposed to talk about downvotes but goddamnit


I also like it, though I have to admit: as a vegan, I would not like to contribute to software parts named after sausage ;-)


I am not a vegan, but I have had vegan (not imitation meat!) sausages that were absolutely delicious! But I admit the likely connotation is pretty meaty


Interesting that they've got a screenshot of SerenityOS.org at the end of their home page; curious if they're looking at SerenityOS's home-grown LibWeb [1] C++ implementation for inspiration in their Go code-base?

SerenityOS is another indie OS effort which plans to build an entire POSIX OS, kernel, and core apps from scratch; one of those apps is a browser built on their LibWeb engine, complete with their own LibJS JS VM, which already passes much of the vast ECMAScript test suite [2]. One of SerenityOS's USPs is being able to make changes to its code-base and reboot the OS with those changes in seconds. I've never seen this done for an OS before; the turnaround time allows for some impressive dev iteration speed.

Andreas's videos on developing LibWeb/LibJS are some of the best resources I've found explaining how to implement a web browser. E.g. in this video he goes through the HTML specs, which have enough info in them to develop an HTML parser whose behavior is the same across all browser engines:

https://www.youtube.com/watch?v=7ZdKlyXV2vw

Most of the interesting parts of LibWeb/LibJS are captured on video by Andreas, who has a unique skill of being able to write code really quickly whilst explaining each step. There must be close to 100 videos on implementing different parts of the web browser on his YouTube channel, e.g.:

https://www.youtube.com/c/AndreasKling/search?query=html

https://www.youtube.com/c/AndreasKling/search?query=js

https://www.youtube.com/c/AndreasKling/search?query=css

https://www.youtube.com/c/AndreasKling/search?query=canvas

[1] https://github.com/SerenityOS/serenity/tree/master/Userland/...

[2] https://github.com/SerenityOS/serenity/tree/master/Userland/...


Ultimately, pursuing a full-featured graphical web browser might not be the right approach for a volunteer-/hobby-driven browser project. But it would absolutely make sense for a newly implemented terminal-based browser in Go (or any other memory-safe language) to replace Lynx and w3m.

It's actually pretty surprising that there hasn't been a niche surge in websites specifically meant to work well in text-mode browsers, considering how many programmers claim to spend 90% of their time inside either a terminal window or a web browser. Even now on HN I'm typing this in Firefox. And Sourcehut, too, is widely acclaimed for being "simple" and/or "minimal", but it's not exactly great (or even clear) to look at in the two browsers I just checked.


This has been happening, just not really on the web. It’s more located on protocols like Gopher[0] or Gemini[1].

Gemini in particular has quite a large number of projects hosted on Sourcehut, and Sourcehut's new pages service even serves pages over Gemini.

[0] https://en.m.wikipedia.org/wiki/Gopher_(protocol)

[1] https://gemini.circumlunar.space/
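Part of Gemini's appeal is how little protocol there is: the whole request is the URL followed by CRLF over TLS, and the response header is a two-digit status, a space, and a meta string. A rough sketch in Go (helper names are mine, not from any library):

```go
package main

import (
	"fmt"
	"strings"
)

// buildRequest forms the entire Gemini request: the absolute URL
// followed by CRLF. There are no headers, methods, or bodies.
func buildRequest(url string) string {
	return url + "\r\n"
}

// parseHeader splits a Gemini response header like "20 text/gemini"
// into its two-digit status code and the meta string.
func parseHeader(line string) (status, meta string) {
	parts := strings.SplitN(strings.TrimRight(line, "\r\n"), " ", 2)
	status = parts[0]
	if len(parts) == 2 {
		meta = parts[1]
	}
	return
}

func main() {
	fmt.Printf("%q\n", buildRequest("gemini://gemini.circumlunar.space/"))
	s, m := parseHeader("20 text/gemini\r\n")
	fmt.Println(s, m) // prints: 20 text/gemini
}
```

In a real client you'd open a `crypto/tls` connection to port 1965, write the request line, and read the header plus body; the sketch above only covers the framing.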


This is a comment advocating for Gemini (which is, from the start, a misguided endeavour) with no concrete examples of how the simple use case mentioned above (whether Sourcehut is straightforward to use from the terminal) is actually solved in Gemini space.


I’m a bit disappointed by the naming convention. There are mayo and sauce components; mayo is not a typical hotdog condiment, and sauce seems a bit generic. And then there is gg.

What about sausage, sauerkraut or onion?


I beg to differ - my wife always puts mayo on hot dogs. She's Mexican, though - maybe it's a cultural thing?


Sonoran dogs are amazing. Definite contender for best drunk food in the world. https://en.wikipedia.org/wiki/Sonoran_hot_dog


Dill pickle, tomato, peppers, relish.


Love the opinionated swearing rant that is the sample website! (See screenshots at bottom).

Yes, simple vanilla HTML is inherently responsive. And much of the modern web is rubbish.



The takeaway for me comes from when I tried the first of those links, and then pressed back to come back to HN, which Chrome took embarrassingly long to render.


i thought it was related to the old hotdog code editor. think i used that to learn html back in the early noughties.. better, simpler days - i really miss them.


It's possibly merely a coincidence. That was over 20 years ago, and the author has another project, "mustard". The guy looks to be somewhere under 30 from the photos I can find.

What an odd name to hit twice in the browser world.

Then again, there's Viola, Cello, and Vivaldi though ... so who knows, maybe the population size is way larger than I pretend it is.


That's what I expected to find.


If you ever decide to actually go somewhere with it, maybe a lightweight alternative to Electron might be a good path to take.

In particular, a browser engine that allows the programmer to easily turn off things that are not used and take away from performance would probably be a real killer.

Good luck!


I think it's a cool project, and might even lead somewhere. At the moment we basically have only WebKit- and Mozilla-based browsers available.

It definitely shows how mature and versatile golang has become.

Are you building this based on a tutorial?


The product’s naming conventions make me think this would have been an ideal post for the first of April. It certainly gave me a good chuckle.

With that said, we need more independent web browsers. Good for them.


Interesting


How come entire operating systems are able to be developed on community-supported efforts (like Debian) but browsers can only be developed by monoliths like Google and Apple (and Mozilla, which has ~750 employees and makes most of its money from Google)?


In practice, many of the components used in desktop Linux distributions are developed by an IBM subsidiary called "Red Hat" which has 12,000 employees. Systemd, GNOME, PulseAudio, X and Wayland, all of these are primarily maintained by Red Hat employees operating under various shell organizations.


Most of these were created before Red Hat had any ownership by IBM.


Chrome is based on WebKit, which itself is descended from KHTML. That was essentially a little widget for formatting help pages in KDE and only had a handful of developers. In fact, it was chosen as the basis for WebKit because it was minimalistic clean code.


KHTML spent all of its life as a full browser engine, namely for KDE's Konqueror browser. The KHTML predecessor, khtmlw, was a simple widget rendering lib.


Blink (Chrome's renderer) is based on webkit. A browser has many components.


The only general purpose, open source operating system that can now compete with the commercial giants, Linux, is now mainly driven by corporate backed contributors. If all corporate backed efforts were to stop overnight, I doubt it could keep up with MacOS/Windows/Android with just community contributions.

The entire hardware/software industry is controlled by giants. Disrupting this can no longer be done by a group of hackers as it was possible in the late 80s and early 90s.


What's there to "keep up with"? UI-wise, desktop operating systems have been pretty much stagnant for a decade, and software-compatibility-wise, people already weren't expecting Linux to run all their newfangled commercial software.


Drivers and security would be my guess.


First, Debian is a distro composed of lots of third-party software. The closest equivalent to a browser in scope is Xorg, and that has gone into maintenance mode for being unmaintainable. Web browsers could be simplified if their internal architecture were designed differently. Browsers today are monolithic programs, but with stuff like Node.js, GJS, webviews, PWAs, etc., parts of the browser really ought to become a system library (with some stability guarantees) plus daemons that run on login. I.e., the solution to the bloated web is modularisation of certain components.


Interesting idea, but what would be the advantage of having, for instance, the JavaScript engine running as a system daemon? This reminds me of Windows Script Host, which was a similar idea but didn't make it. What we see instead are JITs or VMs such as the JVM, Node, or any other language runtime (such as Raku or Julia) which just work standalone, in the same way a browser does.


The JS engine is probably better off as a system lib. The daemons are for stuff like service workers from PWAs; those are already borderline apps (notifications, for example). The biggest advantage of the system-lib style is that you can effectively share it with native programs. It's also a standard cross-platform target: a not-so-bloated Node.js and Electron setup.


I agree with the system library, and this should be standard. A system library and a system daemon are two different concepts, though. HTML engines as shared libraries were common in the past (thinking of KHTML or mshtml.dll) and still are (thinking of Qt WebEngine or WebKitGTK). They are not that popular any more because devs want 100% control over the concrete engine version and quirks, so they prefer shipping their own engine, as with Electron. I feel that there is a strong rejection of Electron-like application development/shipping.


Isn't that Brave's niche?


Brave is chromium-based.


Does it pass the Acid3 test?

I haven't thought about that in a long time. Not since I was using Konqueror + khtml engine.


Fat chance. It doesn't implement any CSS selectors at all.

Servo passed Acid2 in 2014: https://research.mozilla.org/2014/04/17/another-big-mileston... This is really far behind.


It doesn't appear to implement any standards at all, even self contained ones like the HTML parser: https://github.com/danfragoso/thdwb/blob/655eac96e4faa141cb4...


Firefox and Chrome both don't on my machine ... so probably not!


The screenshots are chock-full of explicit vocabulary...

https://raw.githubusercontent.com/danfragoso/thdwb/master/im...


"It's better to be explicit than implicit", perhaps.

That site had its own discussion on HN many years ago: https://qht.co/item?id=6791297


That website gets posted here all the time as an example of lightweight HTML.


I just see regular words there. What's the big deal?


It seems fair enough to me to warn people about rude words. There are still some people who are offended by them (to be fair, I can't imagine many of them are here), and there are quite a lot on that page.


Thank you for the warning, Aussie... erm... Wog...


No fuckin' problem, mate. :)


Welcome to the internets.



