SQL can do anything once you have recursive CTEs and application-defined functions with side effects. Performance and ergonomics are the only remaining concerns.
This is getting dangerously close to how some AAA MMORPGs handle[d] much of their logic and state management.
At the scales these games operate, enterprisey oracle clusters start to look like a pretty good solution if you don't already have some custom tech stack that perfectly solves the problem.
I started playing World of Warcraft at the same time I was studying database systems at university and had a similar curiosity. Twenty years later the AzerothCore project pretty much satisfies this curiosity; they've done an incredible job reverse engineering the game server and its database.
That’s fascinating. I didn’t realize the WoW server was so database-heavy. Do you know if the original game logic was implemented mostly in stored procedures, or was it just used for persistence and the engine handled the rules elsewhere?
It's not, no. The data you see in these files is reconstituted from the data that shipped with the game client, but they're not a perfect match for the real data.
The game servers are all C++ and don't use stored procedures for general gameplay management. Stored procedures do handle inventory management, so that item duping is generally not possible, and complex things like cross-server character transfers use them as well.
I was hoping this was an April Fool's thing, but unfortunately AWS doesn't play like that.
I'll be in the market for a new email provider around Christmas. I'll let the competition fight it out for several months. I don't think looking for a replacement on the deprecation announcement day is good timing. I already see zoho and friends with marketing materials about this.
Well, I was about to migrate to AWS from SmarterMail (which completely ruined my trust with its handling of that disastrous zero-day) when I read this. I am in the market for a replacement too.
Looking for a provider that handles custom domains and aliases, lets me send email from an alias, and offers an API to create/delete aliases programmatically, plus a decent webmail and ActiveSync (or equivalent) for easy smartphone config and push.
So far it seems to me that the only options are Exchange 365 and Google. But I fear those will be over-engineered and super complicated to set up.
> Mass-produced drones today are a simple airframe, a lawnmower engine, and the smarts of a cell phone. Ukraine has people making them in basements. Presumably, so does Iran.
The ships the LCS are intended to replace are significantly more capable at absorbing damage from this type of threat. If you are willing to go up to destroyer class, you are probably approaching immunity for this scenario.
> Former CIA intelligence officer Robert Finke said the blast appeared to be caused by C4 explosives molded into a shaped charge against the hull of the boat.[6] More than 1,000 pounds (450 kg) of explosive were used.[7] Much of the blast entered a mechanical space below the ship's galley, violently pushing up the deck, thereby killing crew members who were lining up for lunch.[8] The crew fought flooding in the engineering spaces and had the damage under control after three days. Divers inspected the hull and determined that the keel had not been damaged.
I'm convinced one of my org's repos is just haunted now. It doesn't matter what the status page says. I'll get a unicorn about twice a day. Once you have 8000 commits, 15k issues, and two competing project boards, things seem to get pretty bad. Fresh repos run crazy fast by comparison.
Copilot on OAI reveals everything meaningful about its functionality if you use a custom model config via the API. All you need to do is inspect the logs to see the prompts they're using. So far no one seems to care about this "loophole". Presumably, because the only thing that matters is for you to consume as many tokens per unit time as possible.
The source code of the slot machine is not relevant to the casino manager. He only cares that the customer is using it.
"Batteries included" ecosystems are the only persistent solution to the package manager problem.
If your first-party tooling contains all the functionality you typically need, it's possible to be productive with zero third-party dependencies. In practice you will tend to have a few, but you won't be relying on third parties for critical things like HTTP, TCP, JSON, string sanitization, or cryptography. These are beacons for attackers: everything depends on this stuff, so the motivation for attacking these common surfaces is high.
I can literally count on one hand the number of third-party dependencies I've used in the last year. Dapper is the only regular one I can come up with. Sometimes ScottPlot. Both of my SQL providers (MSSQL and SQLite) are first party as well, which is a major reason why they're the only SQL providers I use.
Maybe I am just traumatized from compliance and auditing in the regulated software business, but this feels like a happier way to build software too. My tools tend to stay right where I left them the previous day. I don't have to worry about my hammer or screwdrivers stealing all my bitcoin in the middle of the night.
There are several issues with "Batteries Included" ecosystems (like Python, C#/.NET, and Java):
1. They are not going to include everything. This includes things like new file formats.
2. They are going to be out of date whenever a standard changes (HTML, etc.), an application changes (e.g. SQLite/PostgreSQL/etc. for SQL/ORM bindings), or an API changes (DirectX, Vulkan, etc.).
3. Things like data structures, graphics APIs, etc. will have performance characteristics that may not fit your use case.
4. They can't cover all niche use cases, such as the different libraries and frameworks for creating games of different genres.
For example, Python's XML DOM implementation only implements a subset of XPath and doesn't support parsing HTML.
The fact that Python, Java, and .NET have large library ecosystems proves that even if you have a "Batteries Included" approach there will always be other things to add.
"Batteries included" means "ossification is guaranteed", yah. "stdlib is where code goes to die" is a fairly common phrase for a reason.
There's clearly merit to both sides, but personally I think a major underlying cause is that libraries are trusted. Obviously that doesn't match reality. We desperately need a permission system for libraries, it's far harder to sneak stuff in when doing so requires an "adds dangerous permission" change approval.
100% to libraries having permissions. If I'm using some code to, say, compute a hash of a byte array, it should not have access to, say, the filesystem or the network.
But also everyone sane avoids the built-in http client in any production setting because it has rather severe footguns and complicated (and limited) ability to control it. It can't be fixed in-place due to its API design... and there is no replacement at this point. The closest we got was adding some support for using a Context, with a rather obtuse API (which is now part of the footgunnery).
There's also a v2 of the json package because v1 is similarly full of footguns and lack of reasonable control. The list of quirks to maintain in v2's backport of v1's API in https://github.com/golang/go/issues/71497 (or a smaller overview here: https://go.dev/blog/jsonv2-exp) is quite large and generally very surprising to people. The good news here is that it actually is possible to upgrade v1 "in place" and share the code.
There's a rather large list of such things. And that's in a language that has been doing a relatively good job. In some languages you end up with Perl/Raku or Python 2/3 "it's nearly a different language and the ecosystem is split for many years" outcomes, but Go is nowhere near that.
Because this stuff is in the stdlib, it has taken several years to even discuss a concrete upgrade. For stuff that isn't, ecosystems generally shift rather quickly when a clearly-better library appears, in part because it's a (relatively) level playing field.
This looks like an ad for batteries included to me.
Libraries also don't get it right the first time so they increment minor and major versions.
Then why is it not okay for built-in standard libraries to version their functionality also? Just like Go did with JSON?
The benefits are worth it judging by how ubiquitous Go, Java and .NET are.
I'd rather leverage the billions in support paid by the likes of Google, Oracle, and Microsoft to build libraries for me than depend on some random low-bus-factor person, prone to being hacked at any time due to bad security practices.
Setting up a large JavaScript or Rust project is like giving 300 random people on the internet permission to execute code on my machine. Unless I audit every library update (spoiler: no one does it because it's expensive).
Third party libraries have been avoiding those json footguns (and significantly improving performance) for well over a decade before stdlib got it. Same with logging. And it's looking like it will be over two decades for an even slightly reasonable http client.
Stuff outside stdlib can, and almost always does, improve at an incomparably faster rate.
.NET's JSON and their Kestrel HTTP server beg to differ.
Their JSON even does cross-platform SIMD and their Kestrel stack was top 10/20 on techempower benchmarks for a while without the ugly hacks other frameworks/libs use to get there.
stdlib is the science of good enough and sometimes it's far above good enough.
And I think the Go people seem to do a fairly good job of picking out the best and most universal ideas from these outside efforts and folding them in.
Libraries don't get it right the first time, but there are often multiple competing libraries which allows more experimentation and finding the right abstraction faster.
For me, the v2 rewrites, as well as the "x" semi-official repo, are a major strength. They tell me there is a trustworthy team working on this stuff; obviously not everything will always be as great as you might want, but the floor is rising.
Another downside of a large stdlib is that it can be very confusing. It took me a while to figure out how Unicode is supposed to work in Go, as you have to track down throughout the APIs what the right things to use are. Which is even more annoying because the support is strictly binary and buried everywhere without being super explicit or discoverable.
I'm not sure I understand. Why would a standard library, a collection of what would otherwise be a bunch of independent libraries, bundled together, be more confusing than the same (or probably more) independent libraries published on their own?
Please! Nobody uses XPath (because JSON killed XML), or RDF (the semantic web never happened, and one revision every 10 years is not fast), or schema.org (again, nobody cares). PNG: no change in the last 26 years, not fast. And the HTML "living standard" :D is completely optional and hence not a standard by definition.
XPath 1.0 is a pain to write queries for. XPath 2.0 adds features that make it easier to write queries. XPath 3.1 adds support for maps, arrays, and JSON.
And the default Python XPath support is severely limited, not even a full 1.0 implementation. You can't use the Python XPath support to do things like `element[contains(@attribute, 'value')]` so you need to include an external library to implement XPath.
XPath is used in processing XML (JATS and other publishing/standards XML files) and can be used to process HTML content.
RDF and the related standards are still used in some areas. If the "Batteries Included" standard library ignores these then those standards will need an external library to support them.
Schema.org is used by Google and other search engines to describe content on the page such as breadcrumbs, publications, paywalled content, cinema screenings, etc. If you are generating websites then you need to produce schema.org metadata to improve the SEO.
Did you notice that a new PNG standard was released in 2025 (last year, with a working draft in 2022) adding support for APNG, HDR, and Exif metadata? Yes, it hasn't changed frequently, but it does change. So if you have PNG support in the standard library you need to update it to support those changes.
And if HTML support is optional then you will need an external library to support it. Hence a "Batteries Included" standard library being incomplete.
Compared to Node, .NET is batteries included: built-in LINQ vs needing the external lodash package, built-in decimal vs the decimal.js package, built-in model validation vs the class-validator & class-transformer packages, built-in CSRF/XSRF protection vs the csrf-csrf package; I can go on for a while...
That's my point. You can have a large standard library like those languages I mentioned, but that isn't going to include everything nor cover every use case, so you'll have external libraries (via PyPi for Python, NuGet for .NET, and Maven for Java/JVM).
Depends. JavaScript in the browser has many useful things available that I miss with Python, e.g. fetch, for which in Python you need a separate package like requests to avoid a clunky API. Java had this issue for a long time as well; since Java 11 there is HttpClient with a convenient API.
> In practice you will tend to have a few, but you won't be relying on third parties for critical things like HTTP, TCP, JSON, string sanitization, cryptography
Unless you are Python, where the standard library includes multiple HTTP libraries and everyone installs the requests package anyways.
Few languages have good models for evolving their standard library, so you end up with lots of bad designs sticking around forever. Libraries are much easier to evolve, giving them the advantage in terms of developer UX and performance.
What type of developer chooses UX and performance over security? So reckless.
I removed the locks from all the doors, now entering/exiting is 87% faster!
After removing all the safety equipment, our vehicles have significantly improved in mileage, acceleration and top speed!
>What type of developer chooses UX and performance over security? So reckless.
Initially I assumed this was sarcastic, but apparently not. UX and performance are what programmers are paid for! Making sure the UX is good is one of the most important parts of a programmer's job.
While security is a moving target, a goal, something that can never be perfect, just "good enough" (if NSA wants to hack you, they will). You make it sound like installing third party packages is basically equivalent to a security hole, while in practice the risk is low, especially if you don't overdo it.
Wild to read extreme security views like that, while at the same time there are people here that run unconstrained AI agents with --dangerous-skip-confirm flags and see nothing wrong with it.
Even more wild to read that sarcasm about "removing locks from doors for 87% speedup" is considered extreme...
And yes, we agree that running unconstrained AI agents with --dangerous-skip-confirm flags and seeing nothing wrong with it is insane. Kind of like just advertising for burglars to come open your doors for you before you get home - yeah, it's lots faster to get in (and to move about the house with all your stuff gone).
Better developer UX can directly lead to better safety. "You are holding it wrong" is a frequent source of security bugs, and better UX reduces the ways you can hold it wrong, or at least makes you more likely to hold it the right way
> Better developer UX can directly lead to better safety.
Depends. If you had to add to a Makefile for your dependencies, you sure as hell aren't going to add 5k dependencies manually just to get a function that does $FOO; you'd write it yourself.
Now, with AI in the mix, there's fewer and fewer reasons to use so many dependencies.
Friction is helpful. Putting seatbelts on takes more time than just driving, but it’s way safer for the driver. Current dev practices increase speed, not safety.
"Security" is often more about corporate CYA than improving my actual security as a user, and sometimes in opposition, and there is often blatant disregard for any UX concession at all. The most secure system is fully encrypted with all copies of the encryption key erased.
Scala could be one example? When I upgraded to a newer version of the standard library (the Scala 2.13 or Scala 3 collections library), there was a tool, Scalafix [1], that could update my source code to work with the new library. Don't think it was perfect (don't remember), but helpful.
Personally, I've heard that Odin [1] does a decent job with this, at least from what I've superficially learned about its stdlib and included modules as an "outsider" (not a regular user).
It appears to have things like support for e.g. image file formats built-in, and new things are somewhat liberally getting added to core if they prove practically useful, since there isn't a package manager in the traditional sense.
Here's a blog post by the language author literally named "Package Managers are Evil" [2]
(Please do correct me if this is wrong, again, I don't have the experience myself.)
Because native fetch lacks retries, its error handling is verbose, and search-param and body serialization create a ton of boilerplate. I use the KY HTTP client, a small lib on top of fetch with great UX and a trusted maintainer.
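The retry part, at least, is only a few lines if you ever want to sketch it yourself. Here's a rough illustration of the idea; `fetchWithRetry` and its option names are made up for this sketch, and real libraries like KY handle many more cases:

```javascript
// Rough sketch of retry-with-backoff of the kind KY layers on top of fetch.
// fetchWithRetry and its options are illustrative, not a standard API.
async function fetchWithRetry(doFetch, { retries = 3, delayMs = 100 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const response = await doFetch();
      // Native fetch only rejects on network errors, so treat 5xx
      // responses as retryable failures as well.
      if (response.status >= 500) throw new Error(`HTTP ${response.status}`);
      return response;
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        // Exponential backoff between attempts.
        await new Promise((resolve) => setTimeout(resolve, delayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

You'd call it as `fetchWithRetry(() => fetch(url))`; passing a thunk instead of a Request keeps each attempt a fresh request.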
It doesn't matter. We pulled axios out of our codebase, but it still ends up in there as a child or peer from 40 other dependencies. Many from major vendors like datadog, slack, twilio, nx (in the gcs-cache extension), etc...
Fetch has also lacked support for features that xhr has had for over a decade now. For example upload progress. It's slowly catching up though, upload progress is the only thing I'd choose xhr for.
That would show how quickly the data is passing into the native fetch call, but it doesn’t account for any internal buffering it might do, network latency, etc.
That is a way to approximate it, though I'd be curious to know the semantics compared to xhr - would they both show the same value at the same network lifecycle of a given byte?
I have never consciously wrapped Axios or fetch, but a cursory search suggests that there was a time when it was impossible for either to force TLS1.3. It's easy to imagine alternate implementations exist for frivolous reasons, but sometimes there are hard security or performance requirements that force you into them.
AI was trained on Axios wrappers, so it's just going to be wrappers all the way down. Look inside any company "API Client" and it's just a branded wrapper around Axios.
I'm not sure fetch is a good server-side API. The typical fetch-based code snippet `fetch(API_URL).then(r => r.json())` has no response body size limit and can potentially bring down a server due to memory exhaustion if the endpoint at API_URL malfunctions for some reason. Fine in the browser but to me it should be a no-no on the server.
You can pass to `fetch` an `AbortSignal` like `AbortSignal.timeout(5000)` as a simple and easy guard.
If you also want to guard on size, iterating the `response.body` stream with for/await/of and adding a counter that can `abort()` a manual `AbortSignal` is relatively straightforward, though sounds complicated. You can even do that as a custom `ReadableStream` implementation so that you can wrap it back into `Response` and still use the `response.json()` shortcut. I'm surprised I'm not seeing a standard implementation of that, but it also looks straightforward from MDN documentation [1].
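A minimal sketch of that counting guard (`readJsonWithLimit` is a made-up name, and this assumes a runtime like Node 18+ where Response bodies are async-iterable web streams):

```javascript
// Read a fetch Response body chunk by chunk, bailing out once a byte
// budget is exceeded. Exiting the for-await loop early cancels the
// underlying stream, so the connection isn't left dangling.
async function readJsonWithLimit(response, maxBytes) {
  const chunks = [];
  let total = 0;
  for await (const chunk of response.body) {
    total += chunk.byteLength;
    if (total > maxBytes) {
      throw new Error(`response body exceeded ${maxBytes} bytes`);
    }
    chunks.push(chunk);
  }
  return JSON.parse(Buffer.concat(chunks).toString("utf8"));
}
```

Combined with `AbortSignal.timeout(...)` on the fetch call itself, that covers both the hung-endpoint and the runaway-body failure modes.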
> I'm not sure fetch is a good server-side API. The typical fetch-based code snippet `fetch(API_URL).then(r => r.json())` has no response body size limit and can potentially bring down a server due to memory exhaustion if the endpoint at API_URL malfunctions for some reason. Fine in the browser but to me it should be a no-no on the server.
Nor is fetch a good client-side API either; you want progress indicators, on both upload and download. Fetch is a poor API all-round.
Browser fetch can lean on the fact that the runtime environment has hard limits per tab and the user will just close the tab if things get weird. On the server, you're right.
I'm not saying that axios is unmaintained, I'm saying that if you want something like axios from the standard lib, fetch is the closest thing you get to official
It doesn't have a need _now_. Axios is more than 10 years old now, and even before axios other libraries did the same utility of making requests easier
Batteries included systems are still susceptible to supply chain attacks, they just move slower so it’s not as attractive of a target.
I think packages of a certain size need to be held to higher standards by the repositories.
Multiple users should have to approve changes. Maybe enforced scans (though with trivy's recent compromise that won't be likely any time soon).
Basically, we need anything besides the status quo, where a lone developer can decide on a whim to send something out that will run on millions of machines.
While technically true, it's so much slower that it's essentially a different thing. Third-party packages being attacked is a near-daily occurrence. First-party attacks happen on a timescale of decades.
It's like the difference in protecting your home from burglars and foreign nation soldiers. Both are technically invaders to your home, but the scope is different, and the solutions are different.
> they just move slower so it’s not as attractive of a target.
Well, there’s other things. Maven doesn’t allow you to declare “version >= x.y.z” and doesn’t run arbitrary scripts upon pulling dependencies, for one thing. The Java classpath doesn’t make it possible to have multiple versions of the same library at the same time. That helps a lot too.
NPM and the way node does dependency management just isn’t great. Never has been.
I agree with you and follow the same principles myself, but JavaScript already has HTTP, and yet everyone still uses Axios. So the problem isn't that JS doesn't have batteries, it's that people don't want to use them for some reason.
I'm guessing it's similar to the tragedy of the commons phenomenon. When things are freely available people tend to overuse or carelessly use them. NPM is just too easy to use. If a package offers a 1% ergonomics increase over a builtin function, many folks will just go for it because it costs them nothing (well, it seems to cost them nothing).
The other thing that keeps coming up is the github-code-is-fine-but-the-release-artifact-is-a-trojan issue. It really makes me question if "packages" should even exist in JavaScript, or if we could just be importing standard plain source code from a git repo.
I understand why this doesn't work well with legacy projects, but it's something that the language could strive towards.
> I understand why this doesn't work well with legacy projects, but it's something that the language could strive towards.
Why wouldn't that work well with legacy projects? In fact, the projects I was a part of that I'd call legacy nowadays were built by copy-and-pasting .js libraries into a "vendor/" directory, and that's how we shipped them as well. This was in the days before Bower (which was the npm of frontend development back then); vendoring JS libs was standard practice before package managers became used in frontend development too.
Not sure why it wouldn't work, JavaScript is a very moldable language, you can make most things work one way or another :)
Yes - the postinstall hook attack vector goes away. You can do SHA pinning since Git's content addressing means that SHA is the hash of the content. But then your "lockfile" equivalent is just... a list of commit SHAs scattered across import statements in your source? Managing that across a real dependency tree becomes a nightmare.
This is basically what Deno's import maps tried to solve, and what they ended up with looked a lot like a package registry again.
At least npm packages have checksums and a registry that can yank things.
You can just git submodule in the dependencies. Super easy. Also makes it straightforward to develop patches to send upstream from within your project. Or to replace a dependency with a private fork.
In my experience, this works great for libraries internal to an organization (UI components, custom file formats, API type definitions, etc.). I don't see why it wouldn't also work for managing public dependencies.
Plus it's ecosystem-agnostic. Git submodules work just as well for JS as they do for Go, sample data/binary assets, or whatever other dependencies you need to manage.
> But then your "lockfile" equivalent is just... a list of commit SHAs scattered across import statements in your source? Managing that across a real dependency tree becomes a nightmare.
The irony is that this is actually the current best practice to defend against supply chain attacks in the github actions layer. Pin all actions versions to a hash. There's an entire secondary set of dev tools for converting GHA version numbers to hashes
This is where attestation/sigstore comes into play. Github has a first-party action for it and I wish more projects would use it. Regarding javascript specifically, I believe npm has builtin support for sigstore.
or you don't use a package manager where anyone can just publish a package (i.e. use your system package manager). There is still some risk, but it is much smaller. Like, if xz were distributed by PyPI or NPM, everyone would have been pwned, but instead it was (barely) found.
It's true that system repos don't include everything, but you can create your own repositories if you really need to for a few things. In practice Fedora/EPEL are basically sufficient for my needs. Right now I'm deploying something with Yocto, which is a bit more limited in selection, but it's pretty easy to add my own packages, and it at least has hashes so things don't get replaced without me noticing (to be fair, I don't know if the security practices of OpenEmbedded recipes are as strong as Fedora's...).
It's muddying what a package is. A package, or a distro, represents the people who slave and labor over packaging, reviewing, deciding on versions to ship, having policies in place, security mailing lists, release schedules, etc.
Just shipping crap from npm is essentially the equivalent of running your production code base against Arch AUR PKGBUILDs.
Fully agree with this! I think today .NET is probably the most batteries included platform you can get. This means that even if you use third-party libraries, these typically depend only on first-party dependencies, making it much less likely for something shady to sneak in.
C#'s LINQ (code as data, like LISP) wins over golang for any type of data access. Strongly-typed, language-native queries. Go has its own advantages though.
So, you're on Microsoft then; judging by ScottPlot, you write .NET desktop apps. If you use Dapper, you probably use Microsoft.Data.SqlClient, which is... distributed over NuGet and vulnerable to supply-chain attack. You may not need many deps as a desktop dev. Modern line-of-business apps require a lot more: CSVHelper, ClosedXML, AutoMapper, WebOptimizer, NetEscapades.AspNetCore.SecurityHeaders.
Yes, the fewer deps people need, the better, but it doesn't fix the core problem. Sharing and distributing code is a key tenet of being able to write modern code.
This is a rather superlative, tunnel-vision, "everything is a nail because all I have is a hammer" approach. The truth is this is an exceedingly difficult problem nobody has adequately solved yet.
I think the AI tooling is, if not completely solving sandboxing, at least making the default much better by asking you every time they want to do something and providing files to auto-approve certain actions.
Another benefit of AI tooling is that the cost of spinning up your own version of some libraries is lowered, and the result can be made hyper-specific to your needs rather than pulling in a whole library with features you'll never use.
> Another benefit of AI tooling is that the cost of spinning up your own version of some libraries is lowered, and the result can be made hyper-specific to your needs rather than pulling in a whole library with features you'll never use.
Tell me about it. Using AI Chatbots (not even agents), I got a MVP of a packaging system[1] to my liking (to create packages for a proprietary ERP system) and an endpoint-API-testing tool, neither of which require a venv or similar to run.
------------------------------
[1] Okay, all it does now is create, sign, verify and unpack packages. There's a roadmap file for package distribution, which is a different problem.
> at least making the default much better by asking you every time they want to do something
Really? I thought 'asking you every time they want to do something' was called 'security fatigue' and generally considered to be a bad thing. Yes you can concatenate files in the current project, Claude.
I agree that dependencies are a liability, but, sadly, "batteries included" didn't work out for Python in practice (i.e. how do I even live without numpy? No, the stdlib array module isn't enough).
To the extent that Python is indeed "batteries included," that seems true. But just how "batteries included" is it? I'd argue that its batteries are pretty limited. Exhibit A: everybody uses the third-party requests instead of the stdlib urllib. Exhibit B: http.server isn't a production-ready webserver, so people use Flask or something beefier.
I'd contrast Python with Go, which has an amazing stdlib for the domains that Go targets. This last part is key--Go has a more focused scope than Python, and that makes it easier for its stdlib to succeed.
> http.server isn't a production-ready webserver, so people use Flask [...]
Nit, but relevant nit: Flask is also not a production-grade webserver. You could say it is also missing batteries ... and those batteries are often missing batteries too. Which is why you don't deploy flask, you deploy flask on top of gunicorn on top of nginx. It's missing batteries all the way down (or at least 3 levels down).
Appreciate the nit. Had no idea that Flask wasn't production-grade. Yeesh.
I really don't miss this part of the Python world. When I started on backend stuff ~10 years ago, the morass of runtime stuff for Python webservers felt bewildering. uWSGI? FastCGI? Gunicorn? Twisted? Like you say, missing batteries all the way down, presumably due to async/GIL related pains.
Then you step into the Go world and it's just the stdlib http package.
Anyway, ranting aside, batteries included is a real thing, and it's great. Python just doesn't have it.
This just moves the trust from one group to another. Now the standard library/language maintainers need to develop/maintain more high quality software. So either they get overworked and burn out, don't address issues, fail to update things or they recruit more people who need to be trusted. Then they are responsible for doing the validation that you should have done. Are they better equipped to do that? Maybe they go, oh hey, Axios is popular and widely trusted, let's make it an official library and bring the maintainers into the fold... wait isn't this exactly where we started?
What process made you trust the standard library/language maintainers in the first place? How do they differ from any other major library vendor?
What are some examples of batteries-included languages that folk around here really feel productive in and/or love? What makes them so great, in your opinion?
(Leaving aside thoughts on language syntax, compile times, tooling etc - just interested in people's experiences with / thoughts on healthy stdlibs)
I work in a NIS2-compliance sector, and we basically use Go and Python for everything. Go is awesome; Python isn't as such. Go didn't always come with the awesome stdlib that it does today, which is likely partly why a lot of people still use things like Gin for web frameworks rather than simply using the standard library. Having worked with a lot of web frameworks, the one Go comes with is nice and easy enough to extend. Python is terrible, but on the plus side it's relatively easy to write your own libraries with Python, and use C/Zig to do so if you need it. The biggest challenge for us is that we aren't going to write a better MSSQL driver than Microsoft, so we use quite a few dependencies from them, since we are married to Azure. These live in a little more isolation than you might expect, so they aren't updated quite as often as many places might do it. Still, it's a relatively low risk factor that we can accept.
Our React projects are the contrast. They live in total and complete isolation, both in development and in production. You're not going to work on React on a computer that will be connected to any sort of internal resources. We've also had to write a novel's worth of legal bullshit explaining how we can't realistically review every line of code from React dependencies for compliance.
Anyway, I don't think JS/TS is that bad. It has a lot of issues, but then, you could always have written your own wrapper on top of Node's fetch instead of using Axios. Which I guess is where working in the NIS2 compliance sector makes things a little bit different, because we'd always choose to write the wrapper instead of using one others made. With the few exceptions for Microsoft products that I mentioned earlier.
We used to have some C#, but we moved away from it to have fewer languages and because it was a worse fit for us than Go and Python. I'm not sure .NET would really give us any advantages, though. Microsoft treats most major languages as first-class citizens in Azure, and since we build everything to be sort of platform-agnostic, we wouldn't have the tie-ins that you could have with .NET. I'm not saying it would be fun to switch cloud, but all our services are built so that there is a decoupled "adapter" between our core logic and Azure. We use a lot of Azure Functions, for example, but they run in container apps on a managed k8s, so the Azure Function part is really just an ingress that could be swapped for anything else.
It's been a while since I worked with an "actual" function app in Azure. We did have a few .NET ones that weren't using containers. At the time they were pretty good, but today I'm not sure what the benefit over a managed container environment with container apps would be. Similarly with SQL Server. We use it because of governance and how it ties into Data Factory and I guess Fabric, but we don't use ORMs, so something like Entity Framework wouldn't really be something we'd benefit from with .NET.
I think the only thing we couldn't realistically replace and get something similar is the governance, but that's more to do with how Management Groups, Policies, Subscriptions and EntraID works than anything else.
Eventually everything will probably be Python and then C/Zig for compute-heavy parts. Not because Python is great, it's terrible, but it's what everyone uses. We're an energy company, and with the internal AI tools we've made widely available we now have non-SWE employees writing code. It's Business Intelligence, it's Risk Analysis, it's power-plant engineers, it's accountants. They're all working with AI code in their sandboxed environments and it's all Python. Since some of it actually turns out to generate great value, it's better for us (and the business) if our SWE teams can easily take over when "amateur hour" needs to meet operational compliance for the more "serious" production environments. I put things in quotes because I'm still not entirely sure how to express this. A lot of what gets built is great, and would never have been built without AI because we don't have the manpower, but it's usually some pretty bad software. Which is fine, until it isn't.
These are the big ones I use, specifically because of the standard libraries:
Python (decent standard library) - It's pretty much everywhere. There's so many hidden gems in that standard library (difflib, argparse, shlex, subprocess, cmd)
C#/F# (.NET)
C# feels so productive because of how much is available in .NET Core, and F# gets to tag along and get it all for free too. With C# you can compile self-contained executables that bundle the runtime and trim it down, so your executables land in the 15 MiB range. If you have dotnet installed, you can run F# as scripts.
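To make the stdlib-gems point above concrete, here's a quick sketch using only modules named in the Python entry (shlex, difflib). Everything ships with CPython; nothing to install:

```python
import difflib
import shlex

# shlex: shell-style tokenizing, handy for safely parsing command strings
tokens = shlex.split('grep -r "hello world" /tmp')
print(tokens)  # ['grep', '-r', 'hello world', '/tmp']

# difflib.get_close_matches: fuzzy matching for "did you mean?" features
suggestion = difflib.get_close_matches("subprocss", ["subprocess", "shlex", "cmd"])
print(suggestion)  # ['subprocess']

# difflib.unified_diff: human-readable diffs with no third-party packages
old = ["batteries included\n", "is a real thing\n"]
new = ["batteries included\n", "and it's great\n"]
print("".join(difflib.unified_diff(old, new, fromfile="a", tofile="b")), end="")
```

`get_close_matches` alone covers most "did you mean?" UX that people routinely pull in a fuzzy-matching dependency for.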
Do you worry at all about the future of F#? I've been told it's feeling more and more like a second-class citizen on .NET, but I don't have much personal experience.
I used to, but the knowledge of .NET seems mostly transferable to C#. It's super useful to do `dotnet fsi` and then work out the appropriate .NET calls in the F# repl.
While it's true that the packages are first-party, .NET still relies on packages to distribute code that's not directly inside the framework. You still probably transitively depend on `Microsoft.Extensions.Hosting.Abstractions`, for example - if the process for publishing this package was compromised, you'd still get owned.
This is exactly the world I'm working towards: packaging tooling with a virtual machine - i.e., like Electron, but with virtual machines instead - so the isolation aspect comes by default.
For a lot of code, I switched to generating code rather than using 3rd party libraries.
Things like PEG parsers, path finding algorithms, string sanitizers, data type conversion, etc are very conveniently generated by LLMs. It's fast, reduces dependencies, and feels safer to me.
Lol. My most recent comment before this one is here: https://qht.co/item?id=47583593. You judge if AI threatens my identity. But hey, don't let the facts get in the way of a slick narrative.
Or find the best third party library and copy the code from a widely used version that has been out long enough to have been well tested into your source tree.
The problem is not third party libraries. It is updating third party libraries when the version you have still works fine for your needs.
Don't do this. Use a package manager that lets you pin against a specific version. Vendoring sidesteps most automated tooling that can warn you about vulnerabilities. Vendoring is a signal that your tooling is insufficient, 99% of the time.
Vendoring means you don't have to fetch the internet for every build, that you can work offline, that you're not at the mercy of the oh-so-close-99.999 availability, that it will keep on working in 10 years, and probably other advantages.
If your tooling can pull a dependency from the internet, it could certainly check whether a more recent version than the vendored one is available.
This is only true if you aren’t internally mirroring those packages.
Most places I’ve worked have Artifactory or something like it sitting between you and actual PyPI/npm/etc. As long as someone has pulled that version at some point before the internet goes out, it’ll continue to work after.
And this is exactly why we see noise on HN/Reddit when a supply-chain cyberattack breaks out, but no breach is ever reported. Enterprises are protected by internal mirroring.
> Is there any package manager incapable of working offline?
I think you've identified the problem here: package management and package distribution are two different problems. Both tools have possibilities for exploits, but if they are separate tools then the surface area is smaller.
I'm thinking that the package distribution tool maintains a local system cache of packages, using keys/webrings/whatever to verify provenance, while the package management tool allows pinning, minver/maxver, etc.
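A minimal sketch of the pinning half of that split, assuming a lockfile of content hashes (the names and lockfile structure here are illustrative, not any real tool's format):

```python
import hashlib

# Illustrative only: a lockfile pins each artifact to a content hash, and
# the local cache refuses to hand out anything that doesn't match. This is
# the pinning half; provenance (signatures, key rings) would layer on top.
def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

artifact = b"pretend this is a package tarball"
lockfile = {"example-1.0.0.tar.gz": sha256_hex(artifact)}

def fetch_from_cache(name: str, data: bytes, pins: dict) -> bytes:
    # The distribution tool verifies; the management tool only chose the pin.
    if pins.get(name) != sha256_hex(data):
        raise ValueError(f"hash mismatch for {name}: refusing to install")
    return data

print(fetch_from_cache("example-1.0.0.tar.gz", artifact, lockfile) == artifact)  # True
```

The point of the separation is that the verifying side needs no network access or resolver logic at all, so its attack surface stays small.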
Honestly, you can get pretty far with just Bun and a very small number of dependencies. It’s what I love most about Bun. But, I do agree with you generally. .NET is about as good as I’ve ever seen for being batteries included. I just hate the enterprisey culture that always seems to pervade .NET shops.
I agree about the culture. If I take my eye off the dev team for too long, I'll come back and we'll be using entity framework and a 20 page document about configuring code cleanup rules in visual studio.
Not at all. We simply need M-of-N auditors to sign off on major releases of things. And the package managers need to check this (the set of auditors can be changed, same as browser PKI for https) before pulling things down.
That's the system we have in our Safebox ecosystem
> "Batteries included" ecosystems are the only persistent solution
Or write your own stuff. Yes, that's right, I said it. Even HTTP. Even cryptography. Just because somebody else messed it up once doesn't mean nobody should ever do it. Professional quality software _should_ be customized. Professional developers absolutely can and should do this and get it right. When you use a third-party HTTP implementation (for example), you're invariably importing more functionality than you need anyway. If you're just querying a REST service, you don't need MIME encoding, but it's part of the HTTP library anyway because some clients do need it. That library (that imports all of its own libraries) is just unnecessary bloat, and this stuff really isn't that hard to get right.
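For a sense of scale, here's a minimal sketch of the "just querying a REST service" subset in Python: building a GET request and parsing a response by hand. No sockets or TLS shown, and no claim of covering the spec - chunked encoding, redirects, and keep-alive are deliberately out of scope:

```python
def build_get(host: str, path: str = "/") -> bytes:
    # A minimal HTTP/1.1 GET: request line, Host header, and Connection:
    # close so we never have to handle keep-alive or pipelining.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

def parse_response(raw: bytes) -> tuple[int, dict, bytes]:
    # Split headers from body at the blank line, then parse the status
    # line and header fields. Header names are lowercased for lookup.
    head, _, body = raw.partition(b"\r\n\r\n")
    lines = head.decode("iso-8859-1").split("\r\n")
    status = int(lines[0].split(" ", 2)[1])
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return status, headers, body

raw = (b"HTTP/1.1 200 OK\r\n"
       b"Content-Type: application/json\r\n"
       b"Content-Length: 2\r\n"
       b"\r\n"
       b"{}")
status, headers, body = parse_response(raw)
print(status, headers["content-type"], body)  # 200 application/json b'{}'
```

Whether this is enough depends entirely on the server you're talking to; the trade is a tiny reviewable surface against compatibility with the long tail of the spec.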
> When you use a third-party HTTP implementation (for example), you're invariably importing more functionality than you need anyway. If you're just querying a REST service, you don't need MIME encoding, but it's part of the HTTP library anyway because some clients do need it. That library (that imports all of its own libraries) is just unnecessary bloat, and this stuff really isn't that hard to get right.
This post is modded down (I think because of the "roll your own crypto vibe", which I disagree with), but this is actually spot on the money for HTTP.
The surface area for HTTP is quite large, and your little API, which never needed range-requests, basic-auth, multipart form upload, etc suddenly gets owned because of a vulnerability in one of those things you not only never used, you also never knew existed!
"Surface area" is a problem, reducing it is one way to mitigate.
> the "roll your own crypto vibe", which I disagree with
Again, you run into the attack surface area here. Think about the Heartbleed vulnerability. It was a vulnerability in OpenSSL's implementation of the Heartbeat extension (introduced to support DTLS), but it affected every single user, including the 99% that weren't using DTLS.
Experienced developers can, and should, be able to handle things like side-channel attacks and the other gotchas that scare folks off of rolling their own crypto. The right solution here is better-defined, well-understood acceptance criteria and test cases, not blindly trusting something you downloaded from the internet.
1. It's really really hard to verify that you have not left a vulnerability in (for a good time, try figuring out all the different "standards" needed in x509), but, more importantly,
2. You already have options for a reduced attack surface; You don't need to use OpenSSL just for TLS, you can use WolfSSL (I'm very happy with it, actually). You don't need WolfSSL just for public/private keys signing+encryption, use libsodium. You don't need libsodium just for bcrypt password hashing, there's already a single function to do that.
With crypto, you have some options to reduce your attack surface. With HTTP you have few to none; all the HTTP libs take great care to implement as much of the specification as possible.
That's actually not really crypto, though - that's writing a parser (for a container that includes a lot of crypto-related data). And again... if you import a 3rd-party x.509 parser and you only need DER but not BER, you've got unnecessary bloat yet again.
Frankly, inventing a new language is irresponsible these days unless you build on top of an existing ecosystem, because you need to solve all these problems.
> Recursive CTEs use an iterative working-table mechanism. Despite the name, they aren't truly recursive. PostgreSQL doesn't "call itself" by creating a nested stack of unfinished queries.
If you want something that is more like actual recursion (I.e., depth-first), Oracle has CONNECT BY which does not require the same kind of tracking. It also comes with extra features to help with cycle detection, stack depth reflection, etc.
If your problem is aligned with the DFS model, the Oracle technique can run circles around recursive CTEs. Anything with a deep hierarchy and early-termination conditions is a compelling candidate.
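A self-contained way to poke at the working-table behavior is SQLite's recursive CTE support via Python's sqlite3 module - SQLite uses the same iterative, queue-based evaluation described above. The schema here is made up for illustration:

```python
import sqlite3

# A tiny parent-pointer hierarchy: 1 is the root, 2 and 3 are its
# children, 4 is a child of 2, and 5 is a child of 4.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE node (id INTEGER PRIMARY KEY, parent INTEGER);
    INSERT INTO node VALUES (1, NULL), (2, 1), (3, 1), (4, 2), (5, 4);
""")

# The recursive CTE is evaluated iteratively: each pass joins the rows
# produced by the previous pass against `node`, so depth grows one level
# per pass rather than via a nested call stack.
rows = conn.execute("""
    WITH RECURSIVE descendants(id, depth) AS (
        SELECT id, 0 FROM node WHERE id = 1
        UNION ALL
        SELECT node.id, descendants.depth + 1
        FROM node JOIN descendants ON node.parent = descendants.id
    )
    SELECT id, depth FROM descendants ORDER BY depth, id
""").fetchall()
print(rows)  # [(1, 0), (2, 1), (3, 1), (4, 2), (5, 3)]
```

If traversal order matters, SQLite lets you put an ORDER BY inside the recursive SELECT, which turns the working queue into a priority queue and can approximate depth-first traversal; PostgreSQL 14+ instead exposes the standard SEARCH DEPTH FIRST BY ... SET clause for ordering results.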
> If you want something that is more like actual recursion (I.e., depth-first), Oracle has CONNECT BY which does not require the same kind of tracking. It also comes with extra features to help with cycle detection, stack depth reflection, etc.
All that is supported with CTEs as well. And both Postgres and Oracle support the SQL standard for these things.
You can't choose between breadth-first and depth-first traversal using CONNECT BY in Oracle. Oracle's manual even states that CTEs are more powerful than CONNECT BY.
Cats have completely deleted the rabbit populations in a lot of suburbia. I feel like it got worse around 2020 for some reason. I had to move to the middle of the woods to start seeing them again.
Got lots of rabbits in my town, on a tiny nature reserve beside a footpath that goes from some office complexes to an industrial estate. It's ten minutes walk from the houses where people keep cats. I guess all those fluffy neutered cats have dedicated their attention to actual cat food and to the sport of infringing on the territories of other cats, and just aren't very rabbit-centric. If the cats were feral and breeding the rabbits might be in trouble.