
I buy the best camera an iPhone will give me. If the Pro Max's camera were available in the Mini, I would have bought that instead.

The iPhone 5 was the perfect size, and I miss that form factor. But the camera is what sells me the device.


Calling it a compiler is not great IMO; it's more of a JS bundler/transpiler.

You set it up, write your JS code, and Parcel spits out the final JS and CSS assets, ready to embed in your HTML file.

That's oversimplified, but the basics all the same.


Aren't swans just geese in nicer clothes?


May depend on the species. Mute swans are usually okay, except for the occasional one that's an a-hole and is usually well known locally. I've heard that some of the other species do attack if approached.


I would venture to guess that often it is because the answer is 'business reasons', which is often not reasonable.


that's a flippant guess. the underlying reasons or the reasoning for others' decisions may not be apparent to you (i.e., under-explained).

beyond that, if decisions others make often seem not reasonable, it's probable that you disagree with the values on which those decisions are based, rather than those business decisions being without reason. you may be entirely justified in your disagreement, but that's a different animal from unreasonableness.

also, most business decisions are made under uncertainty and with imperfect information (under-informed), and many can seem less reasonable in hindsight as a result.

in any case, it's really unlikely that decision makers are chaos monkeys even if it seems that way from your vantage point.


For websites, as opposed to webapps, this might be fine. But a webapp will pretty nearly always want to look the same cross-platform, which sort of forces styling.

I get it, they didn't like what they looked like before, because reasons. But I wonder what percentage of sites and apps were using unstyled controls.


Honestly, it read like they stopped because it was harder than they wanted to pay for. Or at least the next step would have been.


Being better than [x] does not make Slack good, per se. It might make some thankful they aren't using [x], but that's really about as far as that should go.

I like Slack more than Skype, but I hate Slack with the passion of 1000 fiery suns, especially in regard to many UI/UX decisions. (Why does the new-message line stay in the view AFTER I HAVE RESPONDED?) Not to mention the un...helpfulness (I guess) of their support.

Slack was great when it was new, because it was better in some way than everything else out there. Mind you, not the same way for everything, but in some way it was better than most, if not all, other options available at the time.

That is not the case now. There are other options that are as good, or better. Slack MUST have been aiming for that 'fuck-you' size the entire time, because once they hit critical mass, they seemingly immediately stopped trying to be better, and started trying to be the one you were already paying; a serious downgrade in my opinion.


So coding in one language is a BIT of a red herring.

It only works if the paradigms your app uses are the same in the front and back. And I have not seen a project where the KIND of problems that need solving (in the front vs back) were close enough for the same language to be a benefit.

I mean at the end of the day, I can build a house with JUST a screwdriver, but man, I'd rather use the right tool for the specific job.

And at the end of the day, $LANG is a tool.


My company uses Node with Typescript on the front and backend; we’ve strongly typed our APIs and thanks to the excellent io-ts library we’ve also automated the marshaling and unmarshaling data as it crosses the network boundary so that we can continue to use the strongly typed data, and also are constrained by the types to only call our APIs in valid ways.
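The core idea here can be sketched in a few lines. This is a hand-rolled illustration, not the actual io-ts API, and the `User` shape and `decodeUser` name are hypothetical: a decoder validates unknown JSON at the network boundary and yields a value the type system can trust from then on.

```typescript
// A decoder turns untrusted input into a typed value, or throws.
type Decoder<T> = (input: unknown) => T;

// Hypothetical shape of an API response.
interface User {
  id: number;
  name: string;
}

const decodeUser: Decoder<User> = (input) => {
  if (typeof input !== "object" || input === null) {
    throw new Error("expected an object");
  }
  const obj = input as Record<string, unknown>;
  if (typeof obj.id !== "number" || typeof obj.name !== "string") {
    throw new Error("invalid User payload");
  }
  return { id: obj.id, name: obj.name };
};

// At the boundary: parse untrusted JSON, then decode before use.
const raw: unknown = JSON.parse('{"id": 1, "name": "Ada"}');
const user = decodeUser(raw); // statically typed as User from here on
console.log(user.name);
```

io-ts does essentially this, but derives both the static type and the runtime validator from a single codec definition, so the two can't drift apart.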

Some subset of that can be achieved with stuff like GraphQL or Swagger and whatnot, though I haven't given those a serious try, since we haven't run into any hiccups yet with this system, which relies on a very simple library.

We also use the same package manager (NPM) on both ends and can therefore invoke scripts all over our codebase with the same syntax; and although React code and Node code are quite different in structure, they share the same idioms, async syntax, and so on. So the amount of retraining needed to move someone from backend to full stack isn't very large, and keeping code style consistent across the project is also easy.

So I feel like there’s definitely stuff to profit off of with sharing a single language.


Reminds me of yesterday's HN frontpager, the paper about a "programmable programming language". They were talking about how domain-specific languages within a single language become super distinct and ungrokkable to people who are otherwise fluent in the parent language. Think of an Angular dev reading Nest.js Node backend stuff. With JavaScript this is sorta everywhere. JS DSLs aren't explicitly uninterpretable to otherwise-experienced JS devs, but there is a context-shift cost for sure.

I've not worked with C# much, but is that not as much of a problem with it? Like, is there more of a standardized way of doing things? Python is kinda like that, I guess.


I don't see where it is canceled. The closest thing I see to canceled is postponed.

From the update: 'We will not activate user level product usage tracking on GitLab.com or GitLab self-managed before we address the feedback and re-evaluate our plan.'

That leaves a lot of wiggle room.


"Further, GitLab will commit to not implementing telemetry in our products that sends usage data to a third-party product analytics service."

That seems like a pretty solid indication that the plans are cancelled.


That sounds like they are going to roll out a first-party service, which is better, but not great for the self-hosted deployments.


Telemetry still sucks. I don't want it, and it should never be opt-out.


You opt in to first-party telemetry by using GitLab. It is impossible not to send data to GitLab while using GitLab.

Self-host it if you don't want it. I dunno what to tell you; at some point, the company does have to observe how people use their product, and they'll do so a lot more effectively by looking at how most people are using it, rather than … idk, send a survey or something. Not that they won't do the latter anyway, nothing prevents them from doing that, but it's a very different type of data.

I'm a privacy nut by the way, and nothing in that field pisses me off more than people who vocally shit on telemetry. "I hate you, you should just GUESS what I want rather than do real work to figure it out" sort of thing.

What is it about telemetry you don't like, exactly? And I do say "telemetry" in general, because you're saying it sucks in general. So no specific examples like Windows 10's abhorrently overreaching telemetry, privacy invasions that look at PII, etc.

Telemetry generally is things like "97% of users have visited the issue tracker. 66% of projects with an issue tracker enabled have at least 1 issue. new issue rate on public repositories climbs by 15% if the new issue button is orange instead of green. users spend 30% more time on the new issue page if there's a new issue template. issues with a template have a commit/mr associated with them at a 8% higher rate than issues with empty templates".
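Numbers like those come from aggregation: count events across the whole population, report a rate, and never ship anything about an individual. A toy sketch (the event names and `usageRate` helper are hypothetical, not anything GitLab actually runs):

```typescript
// One raw telemetry event. The user id is used only to count
// distinct users during aggregation; it never leaves the aggregate.
interface TelemetryEvent {
  userId: string;
  action: string; // e.g. "visited_issue_tracker"
}

// Percentage of distinct users who performed a given action.
function usageRate(events: TelemetryEvent[], action: string): number {
  const allUsers = new Set(events.map((e) => e.userId));
  const actedUsers = new Set(
    events.filter((e) => e.action === action).map((e) => e.userId)
  );
  return (actedUsers.size / allUsers.size) * 100;
}

const events: TelemetryEvent[] = [
  { userId: "a", action: "visited_issue_tracker" },
  { userId: "b", action: "visited_issue_tracker" },
  { userId: "c", action: "opened_mr" },
];

console.log(usageRate(events, "visited_issue_tracker")); // roughly 66.7
```

The published figure is the percentage; the per-user rows exist only as input to the count, which is a very different thing from tracking individuals.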

By choosing to die on this hill, you're taking both good-will and attention away from much more severe issues of telemetry abuse, such as "let's collect the precise geoloc of all our users in our gay dating app at 5 minute intervals, store it for 3 years and not care one ounce about security".


Self-hosting was going to have telemetry, which is simply a dealbreaker for many companies.

And it may still have it. I just don't trust GitLab's management anymore.


> You opt in to first party telemetry by using gitlab. It is impossible for you not to send data to gitlab when using gitlab.

There is still a difference between sending actions you selected to the server and tracking where you move the mouse while on the page in your browser or other bs like that. One is required to implement the functionality, the other is not.


Telemetry doesn't necessarily mean tracking mouse movements.


Telemetry is necessary to observe the system and look for adverse impact. You should be more considerate of the people supporting the tools you use, because without telemetry they have a much harder time keeping those tools working for you.


If anything, the deterioration of quality in most modern software is proof that telemetry makes people do a bad job of keeping software working for its users.


Saying "the deterioration of quality in most modern software" is such a cop-out. There's no universal agreement upon any "general" deterioration, and I'm not sure you're keeping track of the "deteriorating" software that has telemetry vs. the one that doesn't. I personally find that a lot of software I use daily does improve over time, especially web software.

You want a counter-example? Reddit has very little telemetry and quite famously barely looks at the data it does gather. You want to talk about deterioration, how's that for some severe rot.


> Saying "the deterioration of quality in most modern software" is such a cop-out.

Fair. It's just my opinion. Though I'm not the only one expressing it. You've probably heard the phrase "optimizing for lowest common denominator", or as 'dredmorbius calls it, "the tyranny of the minimum viable user".

> I personally find that a lot of software I use daily does improve over time, especially web software.

I find the reverse. GMail and Dropbox being prominent examples.

> Reddit has very little telemetry and quite famously barely looks at the data it does gather.

Huh. That's not what I expected. I see Reddit as poster child of making the UX worse and worse, driven by advertising goals - something that generally does correlate strongly with running telemetry. I'm confused about them now.


> GMail and Dropbox being prominent examples.

Dropbox I'd agree with, gmail I actually much prefer the current UI to the old one.

And indeed web services do tend to optimize for the "lowest common denominator", or more generally for the "majority of users". Which does tend to fuck over power-users. But it also means for most people telemetry works out.


I disagree that this works out well, because - perhaps unlike the data-driven companies - I don't believe the measure of a good program is the number of registered users. I believe it's just a half of the equation, and the actual equation looks more like (number of users * average utility for user)[0]. Whenever you dumb down your application by removing useful features or sacrificing ergonomics for looks, you're trading average utility for adoption. The software is more appealing to more people, but less useful to them[1].

It does fuck over power users, but it also fucks over regular users. Not only does doing tasks take longer than it could (or than it took in previous generations of equivalent software), it often precludes them from becoming power users, because a "power user" of a specific suite of software is something a person becomes over time, through repeated exposure. That includes essentially everyone doing a full-time job in front of a computer. I believe dumbed-down software is causing a huge hidden economic loss in reduced efficiency of office workers. Not to mention their misery.

(A good example here would be POS systems. If you've ever seen a DOS-based one, you'll know it's an order of magnitude more efficient to use than the current breed of browser-based ones. The old-school UI was clean, ergonomic, consistent, and fully keyboard-operated, allowing you to do most tasks without even looking at the screen. There was a relevant thread on HN recently[2].)

--

[0] - actually, I think it's more like: $$ \sum_{user \in users} utility_{user} $$ (https://latex.codecogs.com/gif.latex?%5Csum_%7Buser%20%5Cin%...).

[1] - by "useful" I mean, what tasks it lets users accomplish and how efficiently.

[2] - https://qht.co/item?id=21045935


No, it is not. We have been selling software for decades without telemetry and it worked just fine.

I am more than willing to help GitLab, but telemetry in a VCS is simply a red flag (even a legal impediment in many cases).


No 1/2-decent CEO, PR, Legal, or any other department at work here would leave themselves without wiggle room.

I’ll reserve the pitchforks for if this comes up again.


Yup, and it's not that they plan to backtrack; it's to reserve room by committing to a general outcome rather than an exact one. What they plan to do is not necessarily what will happen exactly, as anybody who has been in a position of authority or part of a project knows. Things may take a little longer, there may be some detours, etc. It's insurance so somebody doesn't armchair-nitpick and shame them.


I think there's compelling evidence that there is not a "1/2 decent" CFO at work there...

I'm gonna keep my pitchfork sharp, close, and on display here.


To the extent that the local news knows anything, it is being reported here that the water will be returned to circulation (for lack of a better word) via the sewer and its treatment cycle.


As opposed to what... were they anticipating Google would be breaking down the water into its component hydrogen and oxygen?

