I come from a time when internet connectivity was not permanent.
It was only available a few times per day when you connected via the phone line. My first ISP gave me an allowance of 20 hours of internet per month.
You would dial-up, check the news, check your email, read a page or two, download what you had to download, and then disconnect.
The internet was very slow by today's standards, and the connection would get lost very often.
It was during that time that it was drilled into my head that network access comes and goes.
That it should not be taken for granted.
So a lot of the stuff that I use nowadays, I also have in an offline format.
I keep offline docs, in PDF or HTML format, for most of the programming languages and frameworks that I use.
I keep the source code of various projects that are essential to me.
I keep a local wiki with notes on various things that are useful to me.
Obviously it's not enough for a major catastrophe but it's better than nothing.
I'm by no means a prepper, but I do believe each of us should be prepared for short-term disruptions of various kinds. The network should not be taken for granted.
I love that the TeX Live distribution comes with thousands and thousands of well-written manuals in PDF format; I often end up reading them when on a plane.
I'd love something like Kiwi, designed to work like modern file-syncing software (Box etc.), where it just caches stuff until your drive is mostly full, deleting as necessary.
I travel a lot and do the same. Yes, most places have internet. But I don't need much. And it's easier to have an "offline" folder with the docs you need than to carry around a satellite dish. It also works on an airplane.
Mine contains language, library and game engine docs. Sometimes I back up some sites completely. But it's getting harder to do that as many sites block crawling now.
> have in an offline format. I keep offline docs either in pdf or in html format of most of the programming languages and frameworks that I use. I keep the source code of various projects that are essential to me.
This is such a good idea. Thanks. I'm going to start to do the same.
> I keep a local wiki with notes on various things that are useful
I've been using Zim Wiki for years; back then there was nothing better available and now I can't be bothered to migrate formats. Plus I've already contributed a bunch of plugins to Zim :)
20 hours? My first internet (actually not even internet, it was called eWorld) gave you 4 hours a month… which actually was ok because there wasn’t much to do on it, and you couldn’t go long without someone in your family accidentally picking up the phone anyway, and everyone would be mad if you kept the phone line busy for very long, too.
Yeah, that is normal for me too. If I find an article that I think is interesting, I use SingleFile to download a local copy, and yt-dlp to download any video I find interesting or informational (e.g. tutorials/howtos/etc.). I avoid cloud-based stuff, preferring local/desktop software instead (and 99.9% of it is open source). And when it comes to AI, I use local models only, with inference engines written in C++ (to avoid the dependency hell that is Python, which for some reason seems 100x worse in AI projects).
And yeah, I have downloaded Wikipedia (in ZIM format) :-P
It isn't really for some doomsday-preparation reason; it is just that sometimes the internet doesn't work (it doesn't happen often, but it does happen), or I don't have internet access for whatever reason, or stuff simply disappears/changes.
In fact, just last night I wanted to look up how something is done in Bash, and after trying to search for it, I noticed my internet wasn't working (it took about an hour to come back; it was quite late at night). So I just started a local LLM and asked that instead :-P (I do have the info manuals for Bash, and other stuff, installed, but they are a PITA to search if you don't know exactly what you're looking for).
One thing that annoys me, though, is that it is basically impossible to have an offline copy of a modern Linux distro. Sometime during the late 2000s I bought the full set of Debian DVDs, but Debian stopped providing full ISO sets years ago. Of course, with how big distros are nowadays, you'd probably need something like 100 DVDs :-P. At least there is Slackware.
Well, the point of this is to have a distro that contains "everything" (or at least a large amount of stuff), since I can't know ahead of time what I'd need.
I think it is still possible to use jigdo to make Blu-ray discs, but I do not have a Blu-ray drive :-P
9front plus a good bunch of offline software repos from shithub can fit on a 4GB pendrive or DVD.
Not very usable for running current-day stuff, but you get netsurf, a video player, audio players, a PS/EPUB/PDF/image viewer, DOC/XLS-to-TXT readers (and converters), Unix tools, and games, among 8/16-bit emulators.
With a bit of tinkering it can do a lot; look at the plan9 desktop page with 9front.
There's a Golang port too, and the AWK guide can be a godsend.
This is not for a nuclear winter, but maybe for an internet outage, which can be a real threat.
I recently reread Walkaway and it made me yearn for an offline-first internet, where every computer is a node, and nodes constantly refresh each other's caches when they get the chance (i.e., when the network works), but otherwise basically mirror much of the internet.
The postcode doesn't tell the whole story. But what you can do is use an IP geolocation service which should narrow down your location enough, so that typing in the entire address is no longer necessary.
I.e. using something like https://ipinfo.io/json and then typing in a full postcode and street name + number should work well in most cases.
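The prefill idea can be sketched in a few lines. This is a minimal illustration: the payload below is made up, though the field names (`city`, `region`, `country`, `loc`, `postal`) match what https://ipinfo.io/json typically returns; in a real app you would fetch the JSON over HTTP and let the user correct the defaults.

```python
import json

# Illustrative payload in the shape https://ipinfo.io/json returns.
# Field availability varies by IP; these values are made up.
sample = json.dumps({
    "ip": "203.0.113.7",
    "city": "Amsterdam",
    "region": "North Holland",
    "country": "NL",
    "loc": "52.3740,4.8897",
    "postal": "1012",
})

def prefill_from_geo(payload: str) -> dict:
    """Turn a geolocation response into form defaults the user can correct."""
    geo = json.loads(payload)
    lat, lon = (float(x) for x in geo.get("loc", "0,0").split(","))
    return {
        "city": geo.get("city", ""),
        "region": geo.get("region", ""),
        "country": geo.get("country", ""),
        "postal": geo.get("postal", ""),
        "lat": lat,
        "lon": lon,
    }

defaults = prefill_from_geo(sample)
print(defaults["city"], defaults["postal"])
```

Since geolocation is only a guess (especially on mobile, as noted below), the defaults should prefill the form rather than replace user input.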
IP geolocation is increasingly not useful for anything, especially for mobile users. The best it can do is give you the correct country and maybe get you in the right region.
That link nailed me perfectly. I'm on my phone. Connected to wifi, like most people probably are. Chilling in bed or on the toilet.
If you're on cell service... yeah, probably less accurate. Not sure if it makes the form harder to fill out if you have to change some of the fields.
What I've started doing for my personal app, though, is adding a "guess" button. It fills in the form using heuristics, but it's opt-in. It fills out around 10 fields automatically, and I've tuned it so it's usually right; when it isn't, correcting a few fields is still quicker.
I work for IPinfo. The accuracy you see is actually inferred data. Our IP address location should not perfectly pinpoint anyone, unless that IP address is a data center of some sort. The highest accuracy for a non-data-center IP address is usually at the ZIP code level. As for carrier IP addresses, we currently do one data update per day. If we did more, I guess the accuracy of mobile IP addresses would improve, but on an overall scale it would be quite minuscule.
Our country-level data (which is free) is 10-15 times larger than the free/paid country-level data out there. We constantly hear that the size of the database is an issue. The size is a consequence of accuracy in the first place. So, it is a balancing act.
I work for IPinfo. Has our data been inconsistent for you? We actually invest heavily and continuously in data accuracy. I think for hosting IP addresses we are nearing the highest level of accuracy possible, especially with data center addresses. We are investing in novel, cutting-edge research for carrier IP geolocation.
I am curious about your experience with us so far.
What if I order something on the road and want it delivered to my home? Or what if I want to order something over mobile? My mobile IP is often 1500km away from where I live.
Autofill solves all of that with an implementation cost that approaches zero.
Amazing how an entire profession that until yesterday would pride itself on precision, clarity (in thought and in writing), efficiency, and formality, has now descended into complete quackery.
I can understand the benefit of XML if there is at least a three-level variable structure to share with the LLM. If there is strong consistency in a repeated three-or-more-level structure, then JSON ought to be sufficient. If there is just a one- or two-level structure, it feels like unnecessary quackery, possibly reflective of a poorly trained model if the structure is a genuine necessity.
A comment on libxml, not on your work:
Funny how so many companies use this library in production and not one steps in to maintain this project and patch the issues.
What a sad state of affairs we are in.
About a day after I resigned as maintainer, SUSE stepped in and is now maintaining the project. As announced here [1], I'm currently trying a different funding model and started a GPL-licensed fork with many security and performance improvements [2].
It should also be noted that the remaining security issues in the core parser have to do with algorithmic complexity, not memory safety. Many other parts of libxml2 aren't security-critical at all.
> For the duration of the fellowship, one “maintainer-in-residence” will be employed up to full-time (32-40 hours per week) as part of the Sovereign Tech Agency team.
> This option offers the maintainer the personal and professional advantages of being part of team, as well as the stability of being employed to continue working on critical FOSS infrastructure.
> This position is only available for maintainers located in Germany,
Yeah, I agree; maintaining OSS projects has been a weird thing for a long time.
I know a few companies have programs where engineers can designate specific projects as important and give them funds. But it doesn't happen enough to support all the projects that currently need work; maybe AI coding tools will lower the cost of maintenance enough to improve this.
I do think there are two possible approaches that policy makers could consider.
1) There could probably be tax credits or deductions for SWEs who 'volunteer' their time to work on these projects.
2) Many governments have tried to create cyber reserve corps; I bet they could designate people as maintainers of key projects they rely on, maintaining both the projects and a pool of people skilled with the tools they deem important.
> 1) There could probably be tax credits or deductions for SWEs who 'volunteer' their time to work on these projects.
Why exclusive to SWEs? They tend to be more time-restricted than financially restricted (assuming the "SWE" comes from a job description). I'd be more interested in making sure that those with less well-paying jobs can access such benefits, rather than stacking them onto those already (probably) making six figures.
Of course, the problems arise in the details. Define "volunteer": if $DAYJOB also uses it (in a way related to my role), is it actually, instead, wage theft? Also, quantifying the benefit is a sticky question. Is maintaining 10k emoji packages on NPM equivalent to volunteer work on libcurl? Could it ever be? Is it volunteer work if it ends up with a bug bounty payday? Google's fuzzing grant incentives?
We need a tax on companies using or selling anything OSS, with the funds going back into OSS. The wealth it has generated is insane, and nearly all of it rests on what are effectively donations from experts.
Which is approximately all companies because all companies use software and depending on what the researchers look at, 90% to 98% of codebases depend on OSS.
Conclusion: support OSS from general taxation, like the Sovereign Tech Fund in Germany does. It's a public good!
OSS is allowed to make money, and there are projects that require paid licenses for commercial use.
The source is available and collaborative.
Qt states this on their site:
Simply put, this is how it works: In return for the value you receive from using Qt to create your application, you are expected to give back by contributing to Qt or buying Qt.
There is nothing in the open source licensees that prevents charging money, in fact, non-commercial clauses are seen as incompatible with the Debian Free Software Guidelines.
And there are a lot of companies out there that make their money from open source software; Red Hat is maybe the biggest and most well known.
I meant in the sense that someone else can redistribute the source for free, not that the company has to do it.
> The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale.
Feels more like you don’t understand the concept of the tragedy of the commons.
EDIT: Sorry, I’ve had a shitty day and that wasn’t a helpful comment at all. I should’ve said that as I understand it TOTC primarily relates to finite resources, so I don’t think it applies here. Sorry again for being a dick.
Seems like a compliance thing? I too run my LLMs inside some sort of containment and do "manual" development inside the same environment, but it wouldn't make sense to have that containment be remote, so I'm guessing they need some sort of strict control over it?
While there are compliance/security benefits it is not the primary motivation.
If you have fairly complicated infrastructure, it can be way more efficient to have a pool of ready-to-go beefy EC2 instances already checked out at a recent commit of your multi-GB git repo, instead of having to run everything on a laptop.
Amazon developers use similar devboxes. I think it is mostly so that developers can use a production-like Linux environment with integrated Amazon dev tooling. You're not required to use a devbox, but it can be easier and more convenient than running stuff on your laptop.
The FOSDEM speakers are sent emails to review and approve the video recording (this involves rudimentary stuff like reviewing the start and end time, if the automated system didn't get it right; choosing one of the three audio channels etc). The recordings that have been reviewed and approved would be online by now.
Look forward to ye olde uncle Lennart's old-timey sales pitch.
I'm gonna summarize the Varlink talk: DBus is, and I quote, "very very very complex" and his system with JSON for low-level IPC is, in fact, the best thing since sliced bread and has no significant flaws. It works basically just like HTTP so the web people will love it. Kernel support for more great shit pending! I'm not sure where the hardon for a new IPC system with lernel (keeping that typo) support is from, but he's been trying for 15 years now. AFAICT, the service discovery problem could be solved by a user space service without much trouble. I mean if the whole thing wasn't an exercise in bad technological taste.
Varlink is based on much more conventional, decades-old UNIX technology than DBus: you connect to a named UNIX socket through its socket file in the filesystem (man page: unix(7)).
This is an old mechanism and it is known to work well. It does not require a broker service, it works right at system startup, and it does not require a working user database for permission checks (which would be a circular dependency for systemd in some configurations). If at all, I am surprised that systemd didn't use that earlier.
The main thing that Varlink standardizes on top of that is a JSON-based serialization format for a series of request/response pairs. But that seems like a lightweight addition.
It also does not require kernel support to work, the kernel support is already there. He mentioned in the talk that he'd like to be able to "tag" UNIX sockets that speak varlink as such, with kernel support. But that is not a prerequisite to use this at all. The service discovery -- and he said that in the talk as well -- is simply done by listing socket files in the file system, and by having a convention for where they are created.
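The wire format really is that lightweight, and a rough sketch is easy to write. This is a minimal illustration, assuming varlink's framing of one UTF-8 JSON object terminated by a single NUL byte, and the standard `org.varlink.service.GetInfo` introspection method; the `call` helper is a hypothetical convenience, and actual socket paths vary by service.

```python
import json
import socket

def encode_call(method, parameters=None):
    """Frame a varlink call: a UTF-8 JSON object followed by a single NUL byte."""
    msg = {"method": method}
    if parameters:
        msg["parameters"] = parameters
    return json.dumps(msg).encode("utf-8") + b"\0"

def decode_reply(data):
    """Strip the NUL terminator and parse the JSON reply."""
    return json.loads(data.rstrip(b"\0").decode("utf-8"))

# org.varlink.service.GetInfo is the standard introspection method that
# varlink services implement.
frame = encode_call("org.varlink.service.GetInfo")
print(frame)

# Talking to a real service is ordinary unix(7) socket I/O; no broker needed.
def call(socket_path, method):
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(socket_path)
        s.sendall(encode_call(method))
        buf = b""
        while not buf.endswith(b"\0"):
            buf += s.recv(4096)
        return decode_reply(buf)
```

Compare this with DBus, where the same round trip would go through a binary wire format and (usually) a broker; here the whole "protocol stack" is a socket file plus JSON.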
I do not share your view of an old-timey sales pitch, at least for the talk about systemd-nspawn OCI container support.
If anything, that talk was a tad low-effort, with even dismissive answers ("Yes" and "No?" as full answers to audience questions, with no follow-up?!). Still very informative though!
The Varlink talk really was very salesy for a Fosdem presentation. Shouldn't be long until the recording becomes available, feel free to tell me I was wrong after watching it.
It's mainly re-hashed. I think I've seen the same talk twice before? At least once.
It's a very "I've made a cool thing. This is what I think is cool about it" type of talk. Which I don't think is uncommon for FOSDEM.
Maybe a bit uncommon for a higher profile figure like Lennart.
> It's mainly re-hashed. I think I've seen the same talk twice before? At least once.
He held a similar talk at All Systems Go I think (I missed the talk here at FOSDEM).
> It's a very "I've made a cool thing. This is what I think is cool about it" type of talk.
Varlink isn't something he just made up; he merely "adopted" it (started making use of it). It existed before, but I don't know of anything that really made use of it before.
The official-looking website at https://varlink.org doesn't give any information about who the authors are, as far as I can tell, but the screenshots show the username "kay". There's a git repo for libvarlink [1] where the first commits (from 2017) are by Kay Sievers, who is one of the systemd developers.
An announcement post [2] from later in 2017, by Harald Hoyer, says that the varlink protocol was created by Kay Sievers and Lars Karlitski in "our team", presumably referring to the systemd team.
So the systemd developers "adopted" their own thing from themselves?
While I guess you aren't wrong, I also wouldn't say you are entirely correct that Kay is a systemd developer. He used to work on udev, but hadn't been active on it in any meaningful way for two years before varlink's release[1]. What it was originally made for I can't really say, but Lennart didn't start integrating varlink until a while after its release (I thought it was around 2021 when he started using it, but after another check it seems the varlink work in systemd started in 2019[2]).
Kay Sievers' Wikipedia page cites a blog post by Lennart Poettering [1] which says that systemd was designed in "close cooperation" with Kay Sievers and that Harald Hoyer was also involved, so it seems pretty clear that he's on the team that develops systemd, the team that Harald Hoyer referred to as "our team". All three of them gave a talk [2] together in 2013 about what they were developing.
If Lennart Poettering "adopted" varlink, he seems to have done so from members of his own team ("our team") who created varlink and who are also fellow co-creators of systemd.
Hehe, I'm eagerly waiting for this one as well as I'd be extremely happy to replace some hack to run docker images with `systemd-nspawn` served from the nix store.