Agreed, OpenAI and Anthropic want to get as close to the user as possible. A browser is used more often than any single website or standalone desktop app, and it's much less work than an entire OS. Raycast also seems well positioned, but perhaps more niche.
Perhaps Atlassian was sitting on cash and needed to make some bets. If you can build a big enough user base for a browser, it can earn handsomely from AdWords-type referral fees. Look at what Google pays Apple to be the default in Safari, and how much referral spend Chrome recouped for Google, etc. Maybe Atlassian will try to promote Dia to its customer base and look to launch more AI-type commercial product discovery experiences like Perplexity Shopping.
There's always something hot to hate on. I'm getting flashbacks to the days where everyone was going "nosql is trash" because they all cargo culted to mongodb back in the day and then tried to do olap on it.
Heck, this isn't even the first time SOA has been in the frame of hate. Member SOAP? Member how everyone more or less jumps between doing everything on the server vs everything on the client every year? We've been having that battle ever since networked clients have been a thing; the "sportsification" of that dichotomy is older than I am.
I like the other person's take in this thread about how you can largely get the semantic benefits of microservices by making a well-crafted monolith. I honestly agree, but I think the follow-up is "aren't people who poorly break up service boundaries going to do so regardless of whether the interface is a network endpoint or a function call/dependency-injected class?"
> "aren't people who poorly break up service boundaries going to do so regardless of whether the interface is a network endpoint or a function call/dependency-injected class?"
Today there are several tools that enforce boundaries within a monolith (Spring Modulith, for example). If the project uses one of these tools, it is harder to accidentally cross boundaries and you get many of the same benefits as with a microservice.
The big advantage is that if you find out you made a mistake, your only dependencies are within the same service and it's easier to refactor. In a microservice-oriented architecture, changes might impact several services and teams that need to be coordinated. I'm not saying that refactoring a monolith can't be time-consuming, but you at least have better control of the flow of data between modules.
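To make the "enforced boundaries" idea concrete: Spring Modulith does this for Java packages, but the underlying mechanism is easy to sketch in any language. Here's a minimal, hypothetical Python illustration (the module names and allow-list are made up) where other modules may only import a module's declared public API, never its internals:

```python
import ast

# Hypothetical layout: each top-level package is a "module", and only
# the submodules listed here (its public API) may be imported from
# outside. This mimics the kind of rule tools like Spring Modulith
# enforce for Java packages.
ALLOWED_API = {
    "billing": {"billing.api"},
    "orders": {"orders.api"},
}

def boundary_violations(source: str, importing_module: str) -> list[str]:
    """Return imports in `source` that reach into another module's internals."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            targets = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            targets = [node.module]
        else:
            continue
        for target in targets:
            owner = target.split(".")[0]
            if owner == importing_module or owner not in ALLOWED_API:
                continue  # same module, or not a guarded package
            if target not in ALLOWED_API[owner]:
                violations.append(target)
    return violations

# "orders" code importing billing internals instead of its public API:
print(boundary_violations("from billing.internal.ledger import post_entry", "orders"))
# -> ['billing.internal.ledger']
print(boundary_violations("from billing.api import charge", "orders"))
# -> []
```

Run as a lint step in CI, a check like this turns an accidental boundary crossing into a build failure instead of a network hop, which is the refactoring-friendly part of the argument above.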
An interesting angle would be how to get out of this mess.
Here's one: In politics we see national and supranational governments take on the job of making large decisions. Style guides and best practices on whether to use microservices or not are, at best, made at a company level.
Would it be interesting if these architecture decisions were made, or at least kept in check by a body that is larger than a company?
Perhaps we could have some laws that dictate that you must (at least partially) understand how software works before you can buy it. Especially for government contracts.
The second you think you are smarter and more informed than the people on the ground is the second you reveal yourself to be a fool.
Because having spent decades working for government and enterprises I can assure you that we aren't all stupid and need laws to protect us from ourselves. Instead we are often placed in really challenging and unique circumstances that drive the architecture and design of what gets built.
For example, microservices often work well in places where you have distributed teams that for security, governance, or logistical reasons need to work independently and can't collaborate effectively on a monolith architecture. Or where certain components, e.g. authentication or payments, need a hard separation from the rest of the platform.
Obviously not everyone in government is stupid; I was not implying that. I was rather suggesting that some are, and that something might be done to avoid the problems that causes.
Also, microservices can be great, but when used in the wrong context, obviously are not. We are not discussing that either.
Edit: perhaps you may have misunderstood my comment, and that may be due to my clumsy way of stating it. I was actually suggesting that companies learn from governments.
I'm not money-conscious in that regard. I own a full suite of Apple products and have no issue paying the premium for them.
The issue is with intentionally misleading people into thinking their device is slowing down when they just need a battery replacement. At Apple's scale it's significant and dishonest towards its customers.
When you buy a Porsche, you are not paying an inflated price because you are getting performance. You are paying an inflated price because of the economics surrounding the car: Porsche makes and sells fewer cars, so it needs a higher profit margin per car, and to justify that cost it needs something else, namely the badge (since nobody really uses the performance of those cars on a daily basis to get to work faster).
Fundamentally though, because people who buy these cars aren't buying them for robustness, Porsche sees no reason to invest in reliability, since that would cut into its profit margins. And since there aren't really significant complaints about reliability, it doesn't have a problem, and in turn can sell parts at a premium, because it's accepted that if you're rich enough to afford a Porsche, you're rich enough to pay a premium for upkeep.
Apple is exactly the same way with electronics. You get slightly more spec-wise (if that, TBH) and pay an inflated price because of the branding. And once you buy into the Apple ecosystem, you play by their rules, which means you replace your device with a new one if you want the best performance.
Of course it's not optimal, and for that reason people who are value-conscious don't buy Apple products or Porsche cars.
This is great! Congrats and thanks for sharing your work! People have been asking for dark mode for years. This has it and so many other nice UI improvements too. Well done!
Linear.app is awesome. We switched from Jira and our team's participation on issues went through the roof. It's fast, clean, has keyboard shortcuts, and replaces agile terminology with simpler, familiar paradigms. We activated the Slack, Sentry, and GitLab integrations too. It's a beautiful piece of software to use, and part of the reason we chose it was to inspire ourselves to try to build something of similar quality in our own domain. Seriously, it's that good.
I also used Redmine a few years ago. It gets the job done but is nowhere near as slick and fun to use.
I hate how the elegant simplicity behind the agile development principles was so thoroughly co-opted by scrum, so now when people think “agile” they actually mean scrum but have no idea that’s what’s happening.
I hate how when people say scrum they really mean something which is not well defined and you end up with some weird system that only some "scrum master" understands and that they seem to be making up as they go along.
Scrum is mostly defined. The problem with scrum is that while agile (in general, not only in software dev) is a methodology (a set of practices and tools) to be applied in a particular context, scrum takes those agile methods, packages them into a behemoth with one very particular workflow, and sells that instead of typical "consulting" - deployment of tools to control the process.
In practice scrum is extremely unagile, because there is one very particular workflow to be used and actual workflows have to be adapted to suit scrum. Sometimes it leads to increased agility, but more often than not it does not.
Probably the central piece of scrum is sprints. Nothing wrong with sprints in general, but sprints rely on feature sizes being sprint-sized: they only work when you actually finish the planned stuff over the course of a sprint. Naturally, feature and especially release sizes vary. By focusing on milestones one can actually plan and schedule work and releases. That is agile - you can shuffle stuff around. Instead, scrum sells fixed-size sprints to give an impression of steady forward movement, but this decouples sprints from milestones and, slightly counter-intuitively, reduces agility: shuffling milestones around sprints merely gives you a bunch of unfinished milestones alongside a bunch of finished sprints. But variable-sized sprints based on milestones sound too waterfall-y when the selling point is departure from waterfall.
Interestingly, this attempt to squeeze features into sprint-sized chunks is one of the causes of the technical debt you then must somehow manage, simply because squeezing features comes at the cost of technical debt. There are other causes of technical debt, but it is slightly humorous when a tool aimed at managing/reducing technical debt introduces debt of its own.
A bit tangential to scrum, but Facebook's story with Hack/HHVM is a well-known example somewhat indicative of this. When you size features according to a wall-clock-sized release cadence (instead of sizing release cadence according to feature sizes), you inevitably accumulate technical debt - new features should build on top of past features, but deficiencies in past features prevent new features from being built on them. There are more or less three ways this plays out: 1. actual feature release cadence drops due to increased development weight; 2. release cadence drops due to fixup "features" being released; 3. release cadence drops due to time dedicated to fixing. Don't get me wrong, technical debt accumulates and impacts release cadence either way, but with feature-sized sprints it does not come out of nowhere.
Over the years, teams (usually informally) understood the deficiencies of the scrum process and started throwing pieces of it away, shuffling them around, and ending up with a weird mess of a system. This is not due to scrum not being well defined, but rather to scrum being a hammer in search of nails.
Agile methodologies generally came out of manufacturing - process-based workflows. Scrum takes those practices and packages them as project management tooling. The whole point of agility in processes is the ability to adapt in the middle of the process. Project management, on the other hand, needs plannability and progress tracking. There are weird intersections between the two, and IMO scrum fails to satisfy both - it is neither good at adapting mid-cycle (because scrum checkpoints mid-feature) nor good at long-term plannability (because it focuses on short-term goals). I have seen scrum get distorted in two major ways: either "sprints" get stretched into months-long waterfalls, or sprints mostly reshuffle priorities in the "in progress" pile and checkpoint progress.
Then cite the definition? If you cite https://scrumguides.org/ - I have yet to work on any team that uses scrum and follows it even slightly.
I think this is going to be the natural side effect. I remember seeing Monzo posting salary ranges of £40k - £100k for certain positions. So for the right person they'll make a plan, but they can also lure in more candidates and low-ball them when making an offer at the end of the process.
> Maybe ultimately you open up spam fighting to your users. If you managed this well, you could harness a lot of energy.
Doesn't Google already consider that if a user returns to the results page (or clicks a second link), then the first link visited was not satisfactory? Seems like a pretty elegant solution.
The blog post doesn't say much more than the headline. I'm curious about the specifics of what could have actually happened here.
In my limited experience working with CDNs, wouldn't you just cache the responses of unique URLs and have some sort of cookie check at the edge before serving them?
So my own app would request something like /api/account?id=123 with my own id in there.
How would you end up getting other people's data in your app if your app only calls that unique URL?
It's pretty easy to imagine an API having an endpoint like GET /api/account/mine, which is implicitly parameterised by the user ID associated with that session. Or even a 'list' rather than a 'lookup' endpoint, like GET /api/messages, which fetched all private messages associated with the authenticated user – or whatever the equivalent private information would be, in Klarna's 'domain'.
Edit: If the other commenter is correct, then it's less bad than I imagined. Or rather it would at least only be triggered, seemingly, if someone deliberately and maliciously requested something that didn't 'belong' to them.
I see, so it's likely that a generic URL pattern (like your example) was accidentally included in their caching rules.
I guess I didn't think of that originally because I figured that if you wanted to cache some kind of response data like this, why on earth would you use a generic URL? But they probably didn't intend to cache this endpoint at all.
> How would you end up getting other people's data in your app if your app only calls that unique URL?
From what I understand of this outage, the CDN would still cache /api/account?id=123, and someone with account ID 234 could access it by altering the URL to retrieve the cached version, if account 123 had used the app recently.
That's because a CDN has (usually) no concept of authorization/authentication and can't make decisions that /api/account?id=123 shouldn't be served to someone other than the owner of account 123.
It would be less catastrophic (at least from a PR point of view) because people wouldn't get immediately served others' accounts, but you'd be vulnerable to attack.
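The failure mode described above can be sketched in a few lines. This is a hypothetical simulation, not Klarna's actual setup: a shared cache keyed on the URL alone, sitting in front of a backend that correctly answers per-session. The endpoint and user names are made up for illustration.

```python
# A backend that correctly returns only the authenticated session's data.
backend_data = {"123": "alice's balance", "234": "bob's balance"}

def backend(url: str, session_user: str) -> str:
    return backend_data[session_user]

# A CDN-style shared cache whose key is the URL alone; the session
# cookie never enters the cache key, so responses are shared across users.
cdn_cache: dict[str, str] = {}

def cdn_get(url: str, session_user: str) -> str:
    if url not in cdn_cache:
        cdn_cache[url] = backend(url, session_user)
    return cdn_cache[url]

# User 123 hits the endpoint first; the response gets cached:
print(cdn_get("/api/account?id=123", "123"))  # alice's balance

# User 234 requests the same URL and is served the cached response
# belonging to user 123 -- the cross-user leak:
print(cdn_get("/api/account?id=123", "234"))  # alice's balance
```

Note the backend itself never leaks anything: called directly with session "234" it would return bob's data. The leak lives entirely in the cache key, which is why responses like this are normally marked `Cache-Control: private` or `no-store` so shared caches refuse to store them.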
Yeah, I've just realised they've probably accidentally included a generic URL in the cache rules that they actually didn't intend to cache.
I originally thought they were trying to cache account data responses and so wondered why they wouldn't just use unique query parameters in that case. Definitely risky business though.
What I'm wondering is: why would you ever want a CDN configuration to override no-cache instructions from the backend? I assume there's a use case for this, but I can't figure out what it is. Can anyone explain?