It's great to see yet another attempt at writing a web browser --- and I say this as someone who has been (slowly) working on one myself.
The browser is far from stable, spec-compliant, or even really useful, but I'm slowly working on bringing in more features and supporting more sites.
When the specs are constantly churning in order to keep one gigantic company's browser an effective monopoly, maybe it isn't really that important to follow them so closely... especially if you're aiming for something more like an actually-user-friendly (i.e. with the UI and controls you actually want, not some designer's flavour-of-the-month) hypertext document viewer than a web application runtime/OS. From that perspective, even HTML4+CSS2 would probably be quite sufficient.
Maybe if enough of these "minimal browsers" show up, people might even realise that basic HTML and CSS is quite sufficient for a lot of things and start creating simpler, more efficient sites, thus dissolving the monopoly. I realise it's going to be extremely difficult to fight corporate interests, but one can hope and dream...
This is not for everyone, but if what you want is just basic text and functionality that can work from a terminal (though there are GUI browsers too), then this is perfect.
Neither Gopher nor Gemini is remotely close to being a replacement for the web, as neither supports anything close to rich text or media.
Gemini doesn't allow for embedding images in web pages - which makes it vastly inferior to the web for any kind of interesting documents.
Imagine reading a research paper where, in order to view figures and equations, you had to follow a link to a separate object. No font control. No two-column layout. No anchors to allow you to jump to sections of the document. No metadata to inform you of the authors.
Gemini and Gemtext actively inhibit learning and knowledge dissemination by obsessing over pure plain text, which is bad at those things.
> Gemini doesn't allow for embedding images in web pages - which makes it vastly inferior to the web for any kind of interesting documents.
This statement is incorrect. Gemini clients can absolutely display inline images.
The difference is that the default behavior is to require a user action to load a resource. An image can be a link, but when a user clicks that link it can turn into an inline image. This is how clients like Lagrange work. In other words, inline images can have delayed loading.
If a user understands the consequences (tracking, network usage) of doing so, this behavior can be changed to load images by default; however, authors should not expect users to do this and should write their documents accordingly.
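As a concrete sketch (the filename and link text below are made up for illustration), an image in gemtext is just a link line, which a client that supports delayed inline loading can expand in place when the user activates it:

```
=> figures/response-curve.png Figure 3: measured vs. simulated response
```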
Personally, I prefer a document to not have inline images; my gemini client opens images in my default image viewer instead. My window manager makes my image viewer float above other windows by default. This way, images "pop out" into a separate window that I can keep viewing as I scroll down in a document; I never have to scroll up to look at the last image.
> Imagine reading a research paper where, in order to view figures and equations, you had to follow a link to a separate object. No font control. No two-column layout. No anchors to allow you to jump to sections of the document.
These are all client-side features. Half the point of Gemini is for the user agent to determine presentation and leave semantic markup to authors. I don't want weird fonts or multi-column views, but you do; Gemini lets us both get what we want instead of having everyone see a one-size-fits-all presentation. Clients like Kristall even give you a TOC in the sidebar.
> Gemini and Gemtext actively inhibit learning and knowledge dissemination by obsessing over pure plain text, which is bad at those things.
Text is the only form of communication that can be understood by the sighted, blind, deaf, and machine (translation, etc) while being stored and transmitted without information loss. Text is good at knowledge dissemination.
> This statement is incorrect. Gemini clients can absolutely display inline images.
"clients" and "can" - it's not mandated by the spec, therefore, "Gemini" does not do it.
> The difference is that default behavior is to require a user action to load a resource.
Extremely non-conducive to thought. Again, take the example of a research paper - the difference between having every figure and formula appear by default and having to click-to-load is massive, with the latter being un-ergonomic and inhibiting comprehension and flow.
> These are all client-side features. Half the point of Gemini is for the user agent to determine presentation and leave semantic markup to authors. I don't want weird fonts or multi-column views, but you do; Gemini lets us both get what we want instead of having everyone see a one-size-fits-all presentation. Clients like Kristall even give you a TOC in the sidebar.
You can do exactly this same thing with the modern web with CSS styling and userscripts - the difference being that the web gives you saner defaults that are more conducive to thought, and Gemini clients seem to give you less-sane defaults that are less conducive to thought.
> blind, deaf
This is a limitation of being blind or deaf - someone who's blind wouldn't be able to view a sunset in real life. Obviously, though, while text can be read or listened to by someone who's blind or deaf, that doesn't make text a replacement for images, formulas, or interactive animations - people with those disabilities simply can't perceive the native forms of those things. Several hundred or thousand words describing a PCB layout are not equivalent to an image of the layout.
...and, modern webtech has accessibility properties that allow for annotation of non-text media with text. Gemini? Does not.
> machine (translation, etc)
False. Machines cannot understand plain text - it must be parsed. English (and other natural languages) are not machine-parseable, and plain text that is meant to be machine-readable has no reason not to be structured data in the first place.
> while being stored and transmitted without information loss.
All of the other kinds of electronic data that exist in the modern web can also be stored and transmitted without information loss, so this is not a special property.
> Text is good at knowledge dissemination.
Relative to text+formulas+images+interactive visualizations? Absolutely false.
Show me how to write out all of the variants of the Schrödinger Equation[1] in plain text, while still making it as readable, understandable, and useful as the mathematical formulas.
Show me how to phrase, in words, a 3D circuit layout, such that it's easier to understand and manipulate than an interactive model.
Show me how to describe the sound of a violin.
Webtech gives you text and images and sound and formulas and interactivity. Gemini gives you text, and that's it. Having to click a separate link to go to a separate object makes it not "part of Gemini" and the user experience is clearly, massively worse.
> "clients" and "can" - it's not mandated by the spec, therefore, "Gemini" does not do it.
Prohibiting clients from loading inline images is not mandated by the spec either, so Gemini doesn't prevent it. Loading images inline is fine, as long as it's triggered by a user action. A core idea of Gemini is user control: network requests shouldn't happen without user consent just as presentation should be determined by the user agent.
Non-spec-compliant behavior is also fine if it's explicitly enabled by a user; the default should be spec-compliant.
> You can do exactly this same thing with the modern web with CSS styling and userscripts - the difference being that the web gives you saner defaults that are more conducive to thought, and Gemini clients seem to give you less-sane defaults that are less conducive to thought.
Try changing your browser's default background color and you'll end up seeing a bunch of pages with black text on a gray background. Change your browser's default text layout to two columns and see how many sites still work. The featureset of the web encourages authors to use those features, which begets complexity; complexity begets fragility.
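A minimal sketch of that failure mode, assuming a hypothetical user stylesheet and page rule (selectors and colors are illustrative): each rule is fine on its own, but a page that overrides only half of the color pair breaks against a non-default user background:

```css
/* User stylesheet: the reader prefers a dark default. */
:root { background: #222; color: #ddd; }

/* Page stylesheet: sets text color but silently assumes a white background... */
.article { color: #000; } /* ...and now renders black-on-dark for this user. */
```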
Also, I'm not sure what you mean by "sane defaults"; the "default" HTML presentation is raw markup, and isn't exactly readable. The "default" Gemtext presentation is perfectly readable; in fact, all but two of the blog posts on seirdy.one were initially drafted in raw gemtext rather than markdown. Perhaps you were referring to the default stylesheets of the major browser engines. That is client behavior, and should be compared with existing Gemini clients that focus on presentation as well.
The web allows authors to dictate presentation and deliver content with visual branding; Gemini prevents this to make the focus on content rather than form.
> ...and, modern webtech has accessibility properties that allow for annotation of non-text media with text. Gemini? Does not.
Like the Web and Gopher, Gemini links have display-text. Image links are the same. I consume Gemtext with a screenreader quite regularly, and image consumption is much less painful than it is on the Web. Knowing that users will see text before an image encourages Gemini authors to use good alt-text and to only include images when they convey necessary information that text cannot. Superfluous images are virtually non-existent.
> Machines cannot understand plain text - it must be parsed. English (and other natural languages) are not machine-parseable, and machine-readable plain text had no reason to not exist as structured data in the first place.
That wasn't my point; my point was that text can be parsed and processed by machines much better than other forms of information, improving information dissemination.
> All of the other kinds of electronic data that exist in the modern web can also be stored and transmitted without information loss, so this is not a special property.
Unless you want to load a bunch of 5 MB images, you're going to need resizing and lossy compression. https://xkcd.com/1683/
> Show me how to write out all of the variants of the Schrödinger Equation[1] in plain text, while still making it as readable, understandable, and useful as the mathematical formulas.
I admit that Gemini isn't great at mathematical formulae. Some people are working on Gemini clients that can understand LaTeX code fences.
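As a sketch of what that might look like (client-side rendering support here is an assumption, not part of the Gemini spec), the time-dependent equation could be shipped as a LaTeX fragment inside a preformatted block and displayed as math by a capable client:

```latex
i\hbar \frac{\partial}{\partial t}\,\Psi(\mathbf{r},t) = \hat{H}\,\Psi(\mathbf{r},t)
```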
> Show me how to describe the sound of a violin.
Include a link to an audio file so it plays when the user wants it to. Several clients can play inline audio and video.
---
Gemini isn't for everyone and everything, and that's kind of the point. It certainly doesn't seem like something meant for you, since you seem to be focused on research papers and apps. It's not trying to replace your OS, it's trying to be a part of it. Gemini also doesn't intend to replace the Web; it intends to focus on structured hypertext. The web became a steaming mess because of the feature overload you've described; the solution is to focus on being able to do fewer things and to restrict what's possible to prevent the same thing from happening.
I don't think we'll see eye-to-eye on this, because this looks like a value-based discussion on quality versus quantity to me when I don't think one is trying to replace the other. Alternatives aren't replacements; Gemini is an alternative, not a replacement.
Is there a modern version of lynx/links using an engine like WebKit? I always thought this would be useful but must admit I never got around to writing it.
Browsh [0] might come close to what you're looking for. It's not strictly designed to use less memory or be faster; it's better for bringing up a browser on a remote system, as it still uses a headless instance of Firefox behind the scenes.
It depends on what you mean. Do you mean a command-line-accessible version? If so, there are several that work in terminals. Do you mean text only? If so, there are several browsers that can do this for you :-)
> When the specs are constantly churning in order to keep one gigantic company's browser an effective monopoly, maybe it isn't really that important to follow them so closely...
The notion breaks as soon as your users want to use youtube or gmail, which for some mysterious reason keep insisting on using all these useless "standards" even though they have no benefit to the user.
They break them on purpose so people will switch to Chrome. And the worst part is, there's no evidence (yet?) that there's malice behind it. Maybe we'll get a whistleblower at some point, but that'll only happen when their income no longer covers buying off their conscience.
What exactly is broken in Firefox on YouTube or Gmail? I'm not a heavy Firefox user, but when I try it every once in a while, I've never noticed anything broken on any of the Google's sites, even Maps and Docs worked just fine - but then, maybe I wasn't trying out some advanced features.
Do they only break for users in the United States? I've been using Firefox exclusively since about 2.0, and I've never had any of the Google projects break for me. Not once. Yet this seems to be a pretty common problem around here.
In my experience, YouTube in-video links (those that e.g. show thumbnails of videos and are pointed at by vloggers with their fingers) have been broken in Firefox for years. Not that I complain, I've always been annoyed by those. I think it took me 2 years to realize that after seeing vloggers point at invisible things. I had to double-check in Chrome. (I just assumed that they forgot to add the links when editing.)
Those links don't appear on my Android-based Youtube clients that come with my cable box and Sony TV. The vlogger's hand points to nothing. It's amazing how second-class Youtube on Android TV is, maybe someone at Google hates testing it. I still manage to use it for about 75% of my Youtube usage. My wife and I are definitely going through a Youtube phase, which will probably end when the ads reach a tipping point.
True, a browser is now necessary in these cases. However, my government and banks have strong incentives to make their services accessible and not break usability.
Yes, as long as a requirement is something those corps control, they'll essentially control you.
The answer is to ignore support for gmail and youtube and hope their reputation catches up to them - specifically youtube alienating creators and becoming yet another corporate on-demand television-clone.
I feel ya. I'm refactoring my CSS parser the third time now, because I needed to support nested media queries (and therefore the logical condition "spec").
It's amazing how complex even CSS has gotten. And implementing HTML without the ISO SGML spec and the SGML handbook is close to impossible.
So many edge cases, so much layouting overhead, and so many damned flow-root types.
Honestly in the beginning I thought "how hard can it be, it's just like XML"... I was so wrong about this.
This is exactly it. I looked at doing something like this, ok, I looked at doing this and decided early to forgo CSS because it's so very very complicated.
Truthfully, if my skillset had better matched the task I might have been lured into giving it a go. On another project I considered using CSS as a styling technology and got down to really understand it in detail. I then realized how good my 1,000 yard vision really is.
I can't speak to the forces which inevitably lead to decay but essentially, for some reason every technology tries to eat the world.
CSS is trying to eat the world and HTML5 is trying to eat the world and of course Javascript is famously trying to eat the world.
All these technologies (and here's the part where I see my post begin to fade to gray) are on their way to experiencing technology's version of societal collapse.
I use that analogy because it's so, so apropos.
They are overwrought, overly complex systems yielding only marginally better results and being maintained at huge cost in terms of attention, brain power, and collateral damage by everyone. The benefits accrue to a smaller and smaller number of people (FAANG et al.) who do not have society's best interest at heart at all and are very far from the founding principles which inspired the original vision.
A simple scriptless HTML 1.0 browser minus the blink tag would deliver at least to me nearly 100% of the benefit I get from the web which can be characterized as "seeing what is happening, seeing what other people think, learning new stuff and downloading stuff".
I would love to start a (reactionary) movement away from the current web composed of a privacy-preserving HTML 1.0 browser capable of HTTPS and people dedicated to creating pages and resources for it. I don't know of any such "movement".
If anyone is aware of anything like this do share.
> If anyone is aware of anything like this do share.
Well, I actually tried to start a movement for that [1].
The idea is to offload as much as possible to trusted peers, and to refine the web with a trust model where the user has to trust a website specifically to deliver expected things from the user's side (e.g. a news website should have no right to shove videos down your throat).
I also think that a lot of web browsers tackle the privacy problem wrong. "User Privacy" is not sending a user-agent to a server, or downloading a resource from it in a statistically easily detectable manner.
Real privacy is not having to download anything from the web server at all, by offloading requests to its peers. In my Browser [2] I'm trying to have every metadata, configuration or observation (and extraction) federated. I believe that the real strength of peer-to-peer is not decentralization; it is federation and liberation.
Except (half kidding but only half kidding here) augmented for things like cat videos; previous generations watched tv for entertainment, and for many people that has been partially or entirely replaced by the internet.
I've been arguing this for a while: I want a browser that deliberately cannot run random scripts; only from a vetted source.
That source will contain stuff like autocomplete and htmx or similar.
If something doesn't work with that, it asks you if you want to enable "bloat mode - warning: unsafe", and if the user chooses it, it enables a full browser engine.
NoScript goes a pretty long way to being what you want, while still being usable. You can whitelist scripts per-domain, so you can still block the garbage while making the site functional.
A good suggestion, but LibreJS is about blocking non-trivial JavaScript which isn't Free Software, it's not about vetting specific JavaScript files. A malicious website can declare its JavaScript to be Free Software, and LibreJS will then permit that script to run.
> Maybe if enough of these "minimal browsers" show up, people might even realise that basic HTML and CSS is quite sufficient for a lot of things and start creating simpler, more efficient sites, thus dissolving the monopoly.
I feel like some initiative to establish a super "light" version of the HTML/CSS specs might be a very good thing... browsers are so far out of the hands of individual coders or even small teams now because of their complexity.
This is something I had been pondering. Rather than making a new 'standard' based on a new technology and thus expanding the standards base with complexity, we can go the other way.
Make a specification based on old, tried-and-tested tech - thinking something like a set of HTML/CSS specs that can be implemented fairly easily. It is a spec a website can be built to, knowing that end browsers/users can anticipate being able to render it. It doesn't need anything new added into current browsers, but it is simple enough that others can build their own browsers too.
A vague standard I figured would be that a single person should be able to implement the full spec from the ground up in about one year's full-time work. If done in a group in a free/open manner it could theoretically be done quicker. That said, there is the old joke: two programmers can do in two months what one programmer can do in one month.
Though I think you're restricted in feature selection more by reality than by specifications. CSS is designed with an assumption of designer competence, which modern webdev struggles to achieve; a better strategy would be to replace CSS with semantic markup and integrate with user styles that way.
Piggybacking on this: there's a lot of redundancy in the Web specs, so it may actually be fruitful to start with features which are more expressive/consistent and leave the older ones to polyfills.
For example, if we want JS then it might be worth tackling that first, and bootstrapping the rest (e.g. rendering to one big canvas to begin with).
The way I see it is that for many people web applications have replaced their need for desktop applications while giving a somewhat comparable experience. I think it would be a very hard sell for the average user to want to use a "minimal browser" because it could mean giving up a lot of day to day functionality they rely on.
My dream browser would have two engines -- a simple, very fast engine for "document-like" web pages and a more complex engine that can be loaded on demand using a UI similar to NoScript.
Some disclaimers: I am not an expert on Chromium or even web development and this is all quite ignorant and subjective.
But I have noticed that plain text is sometimes very slow to render in Chromium browsers (Chrome, Edge, Vivaldi) under heavy load. The issue may be related to the process-per-tab architecture: a bloated and fragmented block of memory attached to a tab/process can't be easily freed or allocated. So if you're on a tab that was previously loading lots of stateful JS, then switch to plain text, Chromium might get stuck in memory management for the tab instead of short-circuiting its architecture by allocating memory solely for the page.
I am not at all an expert here and don’t know how Chromium works under the hood. I don’t think it’s literally one-process-per-tab, I just have a vague sketch of what the problem might be here. But I think “idiots like nicklecompte have 700 tabs open and complain that tab 361 doesn’t load plain text quickly” is a problem that’s very difficult to solve in general, even if a dedicated “simple” engine might offer a lot of case-specific fixes.
honestly, I have been toying with the idea of building a web browser project that has no JavaScript. just completely ignores the script tag or any JavaScript related things. like onclick, and other attributes.
Lightweight sandboxed applications in the browser. No admin password needed since there's nothing to install.
Sure it may be bloated overkill for something that could have just been a plain text file, but it is nice to have real time and interactive visualizations.
Plus it is nice to have the choice to use mobile websites instead of using the access hungry native apps.
Figma, docs, sheets. All great apps I can use for any device with a web browser. No install needed, no updates needed. Easy to collaborate with. No "install this application" just "open this url"
Application delivery through the web browser is just hella convenient. It's really hard to give that up for some "native apps and static documents" utopian dream.
The idea of "web applications" is almost as old as the Web itself. Java applets, Flash, Silverlight... were all attempts to bring application functionality to the browser. Billions have been invested in this strategy.
Why?
Because desktop computing used to be a battleground for commercial vendors in the late 20th century, and the goal was establishing market dominance. Being able to control who can run what on a platform was / still is part and parcel of establishing that goal.
Web browsers changed the game. They are a threat and an opportunity at the same time. A threat because they gave anyone a chance to escape from a native context and run whatever they want in a browser regardless of the platform they're on. No more having to compile and distribute the same application for a dozen potential targets.
Microsoft was so adamant about having Internet Explorer bundled with their OS in order to establish control over the future evolution of web applications on the information highway. And they got famously burned for it in that 1999 anti-trust case.
Application delivery as you know it today is convenient, but that came at a price. Vast amounts of resources have been poured into Chromium over the past two decades to bring that experience to billions. And it didn't happen out of sheer altruism on the part of Google.
The fact that web applications are so old indicates the demand for such a delivery platform.
> Microsoft was so adamant on having Explorer bundled with their OS in order to establish control over the future evolution of web applications on the information highway. And they got famously burned for it in that 1999 anti-trust case.
Totally. Microsoft was using it to try control the web as a Microsoft platform. Hence their push for ActiveX over flash/applets/javascript.
> Vast amounts of resources have been poured into Chromium over the past two decades to bring that experience to billions. And it didn't happen out of sheer altruism on the part of Google.
At the time Chrome was started, it was a more or less altruistic move from Google, from the user's perspective at least. Google was heavily reliant on the web for income, and existing browsers were slow, had widely varying standards support, and lots of security issues. Chrome forced their hands by showing that a web browser can be fast and "secure."
Also, at the time Google didn't have a significant platform of their own. They would be at the mercy of the platform gatekeepers. So pushing an open platform that anyone can publish on was in their own interest.
Since then Android has taken off and Chrome has morphed into arguably spyware, but at its inception it was a good thing for users.
> Application delivery as you know it today is convenient, but that came at a price.
A price to whom, though? To those who would try to lock down our platforms and seek rent over application delivery? I guess I don't really care about how much it costs them ;).
You can disable JavaScript by default for some time and then look at the list of websites and apps where you enabled it again. It has been a very frustrating experience to me.
I use Firefox with uMatrix to basically block any JavaScript that I don't explicitly enable. Twenty years ago, browsing the internet without JavaScript was fairly easy. But these days, it seems like upwards of 60% of websites simply don't work at all without JavaScript.
I wonder if there are any projects that help people find sites that don't use JavaScript. Search engines that only index JavaScript-free sites, old school "blog rings" that only link JavaScript free sites together.
> someone who has been (slowly) working on one myself.
Well... that's kind of exciting! Is your work public? Would love to see it.
> maybe it isn't really that important to follow them so closely... especially if you're aiming for something more like an actually-user-friendly (i.e. with the UI and controls you actually want, not some designer's flavour-of-the-month) hypertext document viewer than a web application runtime/OS. From that perspective, even HTML4+CSS2 would probably be quite sufficient.
If you choose not to comply with the specs and stay in experimental land, why not reinvent the world wide web from the ground up? We don't need to use the bloated HTML/CSS/JavaScript/HTTP stack; designing brand-new protocols, reimagining how people connect through the internet, and building clients and servers for them is super fun.
Come on, everybody, this deserves a huge round of applause, for encouragement and for the effort. We need a lot more projects of this kind; we need some options for rebooting the browser scene from down below, however limited in scale and scope and functionality. I stick to my Firefox, crippled and bloated as it is these days, but I know full well that sooner or later it's going to fold, and I shall be stranded inside the Google nightmare. NetSurf shouldn't have to shoulder the complete oppositional burden all by itself.
There's Dillo too, but unfortunately last time I checked neither of them were even as close to usable on most sites as Opera 9.x (which was already many years old at the time, but had been quite popular for a while before that.)
And furthermore, when are we getting open source Presto? I know Opera stopped developing it around 2013, but there's still probably plenty a FOSS browser could salvage.
(I hear the source is floating around somewhere, but without a free license it's (unfortunately) likely to attract problems.)
I remember that they made a special version just for me. An update had removed a feature I used a lot.
A while later I helped them track down an error they couldn't find the cause of. They asked how they could thank me and I replied "bring back feature x".
For a little while they put up a link to a "Tony-edition" on the official download site until they got the feature back in the regular release. I still have it somewhere.
HotDog was my first HTML editor. Brilliant it was. Happy memories.
I was anticipating clicking on it hoping for the source code - because at the time I was writing an HTML editor myself (as a young teen) and was blown away by some of what HotDog could do. Would love to peer into some of those older projects.
I had hoped this as well. I spent so much time in that program building web sites long ago. We have a plethora of excellent dev oriented editors today, but a purpose built one can be so much better by simplifying workflow.
Written in Go. This is presumably good for security.
Curious that the components are named ketchup, mayo, mustard, sauce, bun, and gg. There's an obvious omission here, although it has the advantage of being vegan friendly.
Also, somewhat related: it's a pity the Servo project is going nowhere. I don't mean to put this project down, but Servo was the only realistic shot at a truly new, truly usable Free Software browser.
Servo isn't even close to being production-ready. Everyone seems to make Servo out to be something it isn't. Anyway, the Linux Foundation picked up the project, so your dream may come true some day.
Servo was primarily a test bed for Firefox and all the components that they wanted to get into Firefox eventually made it.
Sweet! I’ve also been working on my own (toy) browser from scratch following https://browser.engineering - would definitely recommend to anyone who wants to get started on their own, granted it’s still a work in progress.
This is a fun learning project, but I don't think it's really a browser except in name. The code seems to implement the behavior someone thinks a browser might have, not what the specs actually say.
That's not an html parser. It's just some code that regexes strings looking for brackets.
Projects like this are good ways to learn and have fun, but they're many years away from being a browser, even if we limit the scope to the specs of say 2012.
Also, FWIW: if you're implementing a browser, use the Web Platform Tests instead of writing your own.
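To illustrate the "regexes strings looking for brackets" problem: a '>' inside a quoted attribute value is perfectly legal HTML, and it defeats the naive pattern. A small Go sketch (the regex and input are made up to show the failure mode, not taken from the project):

```go
package main

import (
	"fmt"
	"regexp"
)

// naiveTag is the regex-style "HTML parser" in question:
// grab everything between a '<' and the next '>'.
var naiveTag = regexp.MustCompile(`<[^>]*>`)

func firstTag(doc string) string {
	return naiveTag.FindString(doc)
}

func main() {
	// Legal HTML: '>' may appear inside a quoted attribute value.
	doc := `<p title="5 > 3">five is bigger</p>`

	// The regex stops at the first '>', cutting the tag in half.
	fmt.Println(firstTag(doc)) // prints: <p title="5 >
}
```

A real tokenizer has to track whether it's inside a quoted attribute value (among the many other states the HTML spec defines), which is exactly where regex-based approaches fall over.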
> ketchup (html parser and DOM Tree builder)
> mayo (css parser and Render Tree builder)
> mustard (UI Toolkit, events and OpenGL)
> sauce (requests, cache and filesystem)
> bun (css layout calculator)
Off topic, but I've encountered some more serious projects that used strange nouns to name their components. I've never figured out why, but it usually results in me spending more time cross-checking what each component does. Can anyone comment?
Because it's fun and if you work regularly with the codebase it makes no difference.
Organizations adopt terms that aren't strict descriptors of the things they refer to all the time, stuff like the Johnny.Decimal system, someone will refer to "form 4.54" or "being compliant with 13485" and everyone will know what it refers to even though it's a perfectly opaque name to outsiders.
Engineers think they are being cute when they do this. In reality it hampers new devs because you can't just look at a component and have an idea of what it's doing. I've been on projects like this and after a year I was still second-guessing myself.
I used to do this. Now I use 'boring' names. My funny joke is not very funny anymore some 2 years on (even then it was only slightly funny). You end up with a lookup chart for what does what, when if I had just used 'normal boring' names in the first place I would not be trying to remember what the object 'funnyname' does vs an object named 'movedatatodb'.
I upvoted not because I agreed (I found the naming conventions distracting and inconsistent) but because why the fuck was your perfectly innocuous comment downvoted?
I am not a vegan, but I have had vegan (not imitation meat!) sausages that were absolutely delicious! But I admit the likely connotation is pretty meaty
Interesting that they've got a screenshot of SerenityOS.org at the end of their home page; curious if they're looking at SerenityOS's home-grown LibWeb [1] C++ implementation for inspiration in their Go code-base?
SerenityOS is another indie OS effort which aims to build an entire POSIX OS, kernel, and core apps from scratch; one of those apps is their LibWeb browser engine, complete with their own LibJS JS VM, which already passes a large portion of the vast ECMAScript test suite [2]. One of the USPs of SerenityOS is being able to make changes to its code-base and reboot the OS with the changes in seconds; I've never seen this done for an OS before, and the turnaround time allows for some impressive dev iteration speed.
Andreas's videos on developing LibWeb/LibJS are one of the best resources I've found explaining how to implement a web browser, e.g. in this video he goes through the HTML specs, which have enough info in them to develop an HTML parser whose behavior is the same across all browser engines:
Most of the interesting parts of LibWeb/LibJS are captured on video by Andreas, who has a unique skill of being able to write code really quickly whilst explaining each step. There must be close to 100 videos on implementing different parts of the web browser on his YouTube channel, e.g:
Ultimately, a full-featured graphical web browser might not be the right goal for a volunteer-/hobby-driven project. But it would absolutely make sense for a newly implemented terminal-based browser in Go (or any other memory-safe language) to replace Lynx and w3m.
It's actually pretty surprising that there hasn't been a niche surge in websites specifically meant to work well in text-mode browsers, considering how many programmers claim to spend 90% of their time inside either a terminal window or a web browser. Even now on HN I'm typing this in Firefox. And Sourcehut, too, is widely acclaimed for being "simple" and/or "minimal", but it's not exactly great (or even clear) to look at in the two browsers I just checked.
This is a comment advocating for Gemini (a misguided endeavour from the start), with no concrete example of how the simple use case mentioned above, whether Sourcehut is straightforward to use from the terminal, is actually solved in Gemini space.
I’m a bit disappointed by the naming convention. There are mayo and sauce components, but mayo is not a typical hotdog condiment and sauce seems a bit generic. And then there is gg.
The takeaway for me comes from when I tried the first of those links, and then pressed back to return to HN, which Chrome took embarrassingly long to render.
I thought it was related to the old HotDog code editor. I think I used that to learn HTML back in the early noughties... better, simpler days - I really miss them.
It's possibly merely a coincidence. That was over 20 years ago and the author has another project "mustard". Guy looks to be somewhere under 30 from the photos I can find.
What an odd name to hit twice in the browser world.
Then again, there's Viola, Cello, and Vivaldi though ... so who knows, maybe the population size is way larger than I pretend it is.
If you ever decide to actually go somewhere with it, maybe a lightweight alternative to Electron might be a good path to take.
In particular, a browser engine that allows the programmer to easily turn off things that are not used and that take away from performance would probably be a real killer.
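As a sketch of the kind of opt-out API that idea implies — every name below is hypothetical, nothing here comes from any real engine — an embedder might start from a config where all optional subsystems default to off and enable only what the app needs:

```go
package main

import "fmt"

// EngineConfig is a hypothetical configuration for an embeddable
// engine where unused subsystems can be disabled outright, so an
// Electron-style app only pays for what it actually uses.
type EngineConfig struct {
	JavaScript     bool // script execution
	WebGL          bool // GPU canvas
	MediaCodecs    bool // audio/video decoding
	ServiceWorkers bool // background workers
}

// Minimal returns a config with every optional subsystem off:
// enough for rendering static HTML/CSS UI, nothing more.
func Minimal() EngineConfig {
	return EngineConfig{}
}

func (c EngineConfig) String() string {
	return fmt.Sprintf("js=%v webgl=%v media=%v sw=%v",
		c.JavaScript, c.WebGL, c.MediaCodecs, c.ServiceWorkers)
}

func main() {
	cfg := Minimal()
	cfg.JavaScript = true // this app needs scripting, nothing else
	fmt.Println(cfg)      // js=true webgl=false media=false sw=false
}
```

The interesting design question is whether "off" means skipped at runtime or compiled out entirely (Go build tags could do the latter), since only the latter shrinks the binary.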
How come entire operating systems are able to be developed on community-supported efforts (like Debian) but browsers can only be developed by monoliths like Google and Apple (and Mozilla, which has ~750 employees and makes most of its money from Google)?
In practice, many of the components used in desktop Linux distributions are developed by an IBM subsidiary called "Red Hat" which has 12,000 employees. Systemd, GNOME, PulseAudio, X and Wayland, all of these are primarily maintained by Red Hat employees operating under various shell organizations.
Chrome is based on WebKit, which itself is descended from KHTML. That was essentially a little widget for formatting help pages in KDE and only had a handful of developers. In fact, it was chosen as the basis for WebKit because it was minimalistic clean code.
KHTML spent all of its life as a full browser engine, namely for KDE's Konqueror browser. The KHTML predecessor, khtmlw, was a simple widget rendering lib.
Linux, the only general-purpose, open source operating system that can compete with the commercial giants, is now mainly driven by corporate-backed contributors. If all corporate-backed efforts were to stop overnight, I doubt it could keep up with macOS/Windows/Android on community contributions alone.
The entire hardware/software industry is controlled by giants. Disrupting this can no longer be done by a group of hackers as it was possible in the late 80s and early 90s.
What's there to "keep up with"? UI-wise, desktop operating systems have been pretty much stagnant for a decade, and software-compatibility-wise, people already weren't expecting Linux to run all their newfangled commercial software.
First, Debian is a distro composed of a lot of third-party software. The closest equivalent to browsers in scope is Xorg, and that has gone into maintenance mode for being unmaintainable. Web browsers could be simplified if designed differently (internal architecture). Browsers are monolithic programs, but with stuff like nodejs, gjs, webviews, PWAs, etc., parts of the browser really must become system libraries (with some stability) and daemons that run on login. I.e. the solution to the bloated web is modularisation of certain components.
Interesting idea, but what would be the advantage of having, for instance, the JavaScript engine running as a system daemon? This reminds me of Windows Script Host, which was a similar idea but didn't make it. What we see instead are JITs or VMs such as the JVM, node, or any other language (such as raku or julia) which just work standalone, in the same way as a browser does.
The JS engine is probably better off as a system lib. The daemons are for stuff like service workers from PWAs; those are already borderline apps (notifications, for example). The biggest advantage of the system-lib style is that you can effectively share it with native programs. It's also a standard cross-platform base: a not-so-bloated nodejs and Electron setup.
I agree with the system library, and this should be standard. A system library and a system daemon are two different concepts, though. HTML engines as shared libraries were common in the past (thinking of KHTML or mshtml.dll) and still are (thinking of Qt WebEngine or WebKitGtk). They are not that popular any more because devs want to have 100% control over the concrete engine version and quirks, so they prefer shipping their own engine, as with Electron. I feel that there is a strong rejection of Electron-like application development/shipping.
It seems fair enough to me to warn people about rude words. There are still some people who are offended by them (to be fair, I can't imagine many of them are here), and there are quite a lot on that page.
https://qht.co/item?id=25915313