That's one of the myths the gambling dens propagate: that they are there for the veterans. There is no technicality about it.

https://www.rslaustralia.org/rsl-sub-branches-and-rsl-clubs-...

The "RSL sub-branch" is a not-for-profit welfare organisation, that looks after veterans. For the most part they are small and if they are lucky they get the use of a meeting room in the RSL club.

The "RSL Club" is a multimillion dollar commercial enterprise that looks after its own interests, conducts political lobbying, makes millions of dollars off gambling addicts and hands out token grants in the community to give the impression that they are there to benefit the community. Typically nothing to do with the RSL sub-branch.


Location: Sydney, Australia

Remote: Yes

Willing to relocate: No

Technologies: WiFi, Deep knowledge of drone radio protocols, LTE, P25, LMR, Electronics, Electrical, Systems Engineering, Programming including real-time embedded, Commercialisation, Manufacturing, DSP, FPGA.

Résumé/CV: Built the world's first OFDM WiFi modem. Professional engineer with 35 years' experience. Have served as Chief Engineer / CTO for multiple companies.

Email: jd.web@jwdalton.com


With an associated article in the mainstream press:

https://www.smh.com.au/national/nsw/it-s-dangerous-and-that-...


I can't see a problem, as long as the chips are not fraudulently resold. Beyond not using a resource in the first place, reuse is the gold standard in sustainability.

As an engineer, I wouldn't use second-hand components for prototyping. When prototyping you need to eliminate as much uncertainty as possible. I'd consider using second-hand components in production, provided there is a cost advantage, supply is reliable and my production line includes a test that would pick up faulty components. Even then, I'd be monitoring failure rates and reverting to new components if elevated failure rates drove up costs. There's an argument that (well-handled) second-hand components might even have a lower failure rate than new ones, as they have been burned in.
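As a rough illustration of the kind of monitoring I mean, here's a minimal Python sketch. The baseline rate, lot size and threshold are made up for illustration; they are not real figures:

    import math

    def lot_alarm(failures, tested, baseline_rate, sigmas=3.0):
        # Flag a lot of second-hand parts if its failure count sits more
        # than `sigmas` standard deviations above what we'd expect from the
        # baseline rate seen with new parts (normal approximation to the
        # binomial; fine when `tested` is large).
        expected = baseline_rate * tested
        std_dev = math.sqrt(tested * baseline_rate * (1 - baseline_rate))
        return failures > expected + sigmas * std_dev

    # Hypothetical numbers: new parts historically fail at 0.2%; a
    # 5000-piece second-hand lot shows 25 failures (0.5%).
    print(lot_alarm(25, 5000, 0.002))  # True -> revert to new stock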

I'm guessing this company is targeting specialised repair rather than production. Sometimes complex parts are no longer manufactured and the only option is second hand (often at a premium price).


>I can't see a problem, as long as the chips are not fraudulently resold.

In general, most components are only rated for 2 to 4 re-flow heating cycles before internal damage occurs. On some components the initial re-flow cycle brings the component into the rated tolerance, and for others the PCB forms a bimorph cantilever that physically fatigues the chip contacts/leads.

Production yields are only part of the Infant Mortality Phase of the bathtub curve.
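For anyone who hasn't met the bathtub curve: the hazard rate is conventionally modelled as a falling infant-mortality term, plus a flat random-failure term, plus a rising wear-out term. A toy Python sketch, with constants that are purely illustrative and not from any datasheet:

    def bathtub_hazard(t, infant=0.05, random_rate=0.001, wearout=1e-12):
        # Toy bathtub curve: decaying infant mortality + constant random
        # failures + power-law wear-out. Time in hours, constants made up.
        return infant / (1 + t) + random_rate + wearout * t**2

    for hours in (1, 100, 10_000, 100_000):
        print(hours, f"{bathtub_hazard(hours):.6f}")  # high, low, low, rising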

Some components do get more stable with age if and only if left alone, but you can count those on one hand if you still have all your fingers. That is also a 3 hour pedantic conversation no one wants to have.

I am secretly a sentient turnip... =3


That's a fair point: that heating due to repeated (de)soldering can cause degradation.

We want to get the data on that. The more boards we process, the better we know the failure rates. Do you have an intuition of what exactly degrades?

My intuition in this area is based on chips having a specification on maximum soldering temperature and duration. I'm not sure to what extent that is cumulative. I gather the vulnerability is the bonding of the gold bond wires to the pads on the silicon, but you would want to check that.

Apart from the absolute temperature, chips have a recommended heating/cooling cycle, including heating/cooling rates. That suggests that differential expansion is a factor, which would likely be cumulative (more cycles = more likelihood of fatigue and damage).

The above is intuition, not the hard data you want.
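That said, if you wanted rough numbers for the fatigue mechanism, the conventional starting point is a Coffin-Manson-style model, where cycles to failure scale as an inverse power of the temperature swing. A sketch with generic, hypothetical reference values (not from any datasheet):

    def cycles_to_failure(delta_t, n=2.0, ref_delta_t=100.0, ref_cycles=1000.0):
        # Coffin-Manson-style scaling: halving the temperature swing
        # multiplies the number of survivable cycles by roughly 2**n.
        # ref_delta_t / ref_cycles are hypothetical calibration points.
        return ref_cycles * (ref_delta_t / delta_t) ** n

    print(cycles_to_failure(200.0))  # ~250 cycles at a reflow-sized swing
    print(cycles_to_failure(50.0))   # ~4000 cycles at an operating swing

That kind of scaling would be consistent with parts tolerating only a handful of reflow cycles but years of ordinary thermal cycling.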

I think what you are doing is a great idea (effectively demanufacturing). I'm hoping you can solve the practicalities, which as far as I can see are quality assurance, a guaranteed steady supply of components, and a price point below new.

Any plans to retape the components so they can be put through a pick-and-place machine, or are you looking more towards manual rework? I can see that there is room for innovation in efficient ways to get components off boards at volume, as most component removal is in the form of manual rework.


...OK, I'll bite, what's the "sentient turnip" bit about?

As a member of the genetic tree, I’ll go out on a limb and suggest Autism. It seems a lot like Autism.

Not really, and most autistic people I've met are very focused individuals. Met one guy whose whole world was the Unreal engine source, and unless you were talking about that specific area... he couldn't have cared less who you were.

Be kind to yourself first, and maybe get outside for a walk. Best regards =3


That guy sounds very interesting; Unreal Engine is fun.

Biting is considered bad manners... and don't worry about it. =3

The way we see it is that with robotics and coding agents we can offer much more comprehensive and traceable tests. Anyone can send any hardware they have; we help them understand what is reusable and how to verify it, then ship the parts back.

Curious: what specific failure modes or uncertainties would you want eliminated before you'd consider using recovered parts, even just in production?


Makes sense if the drawers completely fill the volume of the fridge, so most of the air is inside the drawers and there is minimal air loss when the door opens. If the drawer fronts were insulated, each drawer would effectively be its own chest.

Edit: On a reread, I'm guessing you were talking about individual refrigerated drawers? Multiple drawers in a single insulated box (as I interpreted it) could work though, as it would have less exterior surface area, use less insulation for the same thermal resistance and useable volume and have a single cooling unit, which might be more efficient. It would also fit existing fridge alcoves.


If you designed around it, it would fit where existing kitchens have drawers, and the space typically reserved for a vertical fridge would be occupied by shelving. Kind of a neat idea. Microwave drawers are a thing.

Under-counter refrigerators are also a thing. They're often not cheap, though. KitchenAid has a two-drawer one for around $3,000, but you can find off-brand ones for $700, too. I don't know if the KitchenAid is that much better. There are things to take into account; it's not just as simple as 'put a short, 24-inch-deep fridge where the drawers go'.

They make them already, they just tend to be expensive. Look up Sub-Zero.

It would be inconvenient to have to clear the counter each time you want to access the fridge.

I keep my counters largely clear so I can cook, anyway.

Well, but do you never have to open the fridge to get an ingredient halfway through cooking?

Discussed on HN in January 2025: https://news.ycombinator.com/item?id=42808889


Why can't an index fund compute and track their own objective index, thus ignoring any distortion introduced by the Nasdaq?


They don't target something else because then they wouldn't be an index fund, just a passive fund with their own published strategy. Those exist but aren't as popular; the appeal of index funds is that you're just getting "the market", and "the market" is measured by the index. Public indexes are supposed to be lower-cost and less manipulable, but that was before they got large enough to "wag the dog," which is the ultimate point of the article.
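For what it's worth, the arithmetic of an index is the easy part. A Python sketch of a cap-weighted index level, with made-up constituents and divisor:

    def cap_weighted_index(constituents, divisor):
        # Headline indexes are mostly float-adjusted market cap over a
        # divisor; the hard part is the trusted, published methodology for
        # inclusion, rebalancing and corporate actions, not the sum.
        market_cap = sum(price * shares for price, shares in constituents)
        return market_cap / divisor

    # (price, shares-outstanding) pairs, entirely made up
    holdings = [(150.0, 16e9), (420.0, 7.4e9), (175.0, 12.5e9)]
    print(cap_weighted_index(holdings, divisor=1e9))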


Because when I buy QQQ I expect it to track the Nasdaq-100, not something else.


The vast majority of index funds do not track the NASDAQ 100.


This is the detail I'd really like to know more about


The top 3 most popular index fund ETFs track the S&P 500, which doesn't really pull this kind of shenanigan. Only QQQ tracks the NASDAQ 100, and it's in 5th place by assets under management.

You should probably read a book about index investing if you are going to invest.


Yeah, but the S&P 500 is hugely concentrated in the Mag 7, which are all Nasdaq-listed. So when they all get sold to buy SpaceX, you can bet your butt something's gonna happen to an S&P 500 ETF.


SPY is somewhat concentrated in the Mag 7 (or the other 93 stocks in QQQ), but only a small percentage of the Mag 7 is owned via QQQ, which has $400B AUM. (The Mag 7 is $19T.)
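Back-of-the-envelope, using the figures above and assuming (my assumption, not a quoted number) that roughly half of QQQ's AUM sits in the Mag 7:

    qqq_aum = 400e9           # QQQ assets under management, as quoted above
    mag7_cap = 19e12          # combined Mag 7 market cap, as quoted above
    mag7_share_of_qqq = 0.5   # assumed fraction of QQQ held in Mag 7

    fraction_owned_via_qqq = qqq_aum * mag7_share_of_qqq / mag7_cap
    print(f"{fraction_owned_via_qqq:.1%}")  # ~1.1% of the Mag 7 held via QQQ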

The bottom line is all this fuckery is a tiny blip for most investors. Far more concerning to me is the societal harm that will come from further enriching Elon.


Cheaper than Bruce Simpson's US$5000 cruise missile.

https://en.wikipedia.org/wiki/Bruce_Simpson_(blogger)#DIY_Cr...

The $5k cruise missile dates from 2003 and was based on a pulse jet, a bit like a GPS guided V-1.


> If you had a hermetically sealed code base that just happened to coincide line for line with the codebase for GCC, it would still be a copy.

That's not what the law says [1]. If two people happen to independently create the same thing they each have their own copyright.

If it's highly improbable that two works are independent (e.g. the gcc code base), the first author would probably go to court claiming copying, but their case would still fail if the second author could show that their work was independent, no matter how improbable.

[1] https://lawhandbook.sa.gov.au/ch11s13.php?lscsa_prod%5Bpage%...


It is true that if two people happen to independently create the same thing, they each have their own copyright.

It is also true that in all the cases I know about where that has occurred, the courts have taken a very, very, very close look at the situation and taken extensive evidence before being convinced that there really wasn't any copying. It was anything but a "get out of jail free" card; it was in fact difficult and expensive, in proportion to the size of the works in question, to prove to the court's satisfaction that the two things really were independent. Moreover, in all the cases I know about, they weren't actually identical, just really, really close.

No rational court could ever accept the claim that a line-by-line copy of gcc was written independently. The probability of that is one in ten to the power of "doesn't even remotely fit in this universe, so forget about it". The bar to overcoming that is simply impossibly high, unlike two songs that happen to have similar harmonies and melodies, given the exponentially more constrained space of "simple song" as compared to a compiler suite.


All of this is moot for the purposes of LLMs, because it's almost certain that the LLMs were trained on the code base, and are therefore "tainted". You can't do this with humans either. Clean-room design requires separate people for the spec and the implementation.


That's the "but their case would still fail if the second author could show that their work was independent, no matter how improbable" part of the post you're responding to.


One out of ten to the power of "forget about it" is not improbable, it's impossible.

I know it's a popular misconception that "impossible" = a strict, statistical, mathematical 0, but if you try to use that in real life it turns out to be pretty useless. It also tends to bother people that there isn't a bright shining line between "possible" and "impossible" like there is between "0 and strictly not 0", but all you can really do is deal with it. Wherever the line is, this is literally millions of orders of magnitude on the wrong side of it. Not a factor of millions, a factor of ten to the millions. It's not possible to "accidentally" duplicate a work of that size.


It sounds to me like you're responding to a different argument than they're actually making and reading intent into it that isn't written into it.


Thank you for providing a reference! I certainly admit that "very similar photographs are not copies" as the reference states. And certainly physical copying qualifies as copying in the sense of copyright. However I still think copying can happen even if you never have access to a copy.

I suppose a different way of stating my position is that some activities that don't look like copying are in fact copying. For instance it would not be required to find a literal copy of the GCC codebase inside of the LLM somehow, in order for the produced work to be a copy. Likewise if I specify that "Harry Potter and the Philosopher's Stone is the text file with hash 165hdm655g7wps576n3mra3880v2yzc5hh5cif1x9mckm2xaf5g4" and then someone else uses a computer to brute force find a hash collision, I suspect this would still be considered a copy.

I think there is a substantial risk that the automatic translation done in this case is, at least in part, copying in the above sense.


I fully agree with you. (A small information-theory nitpick with your example: the hash and program would have to be at least as long as a perfectly compressed copy of Harry Potter and the Philosopher's Stone. If not, you've just invented a better compressor and are in the running for a Hutter Prize [1]! A hash and "decompressor" of the required length would likely be considered to embody the work.)
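To put rough numbers on that pigeonhole argument (treating the 52-character string above as base-36, and guessing at the book's size and compressibility):

    import math

    hash_bits = 52 * math.log2(36)   # ~269 bits in the quoted string
    book_chars = 77_000 * 6          # ~77k words x ~6 chars: an assumption
    book_bits = book_chars * 1.0     # ~1 bit/char for compressed English

    print(round(hash_bits), round(book_bits))  # 269 vs ~462000 bits
    # Pigeonhole: vastly more possible books than hash values, so the hash
    # alone can't single out the work; the "decompressor" must embody it.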

It's an interesting case. As I understand it, there is an ongoing debate within the AI research community as to whether neural nets are encoding verbatim blocks of information or creating a model which captures the "essence" or "ideas" behind a work. If they are capturing ideas, which are not copyrightable, it would suggest that LLMs can be used to "launder" copyright. In this case, I get the feeling that, for legal clarity, we would both say that the work in question (or works derived from it) should not be part of the training set or prompt, emulating a clean room implementation by a human. (Is that a fair comment?)

I've no direct experience here, but I would come down on the side of "LLMs are encoding (copyrightable) verbatim text", because others are reporting that LLMs do regurgitate word-for-word chunks of text. Is this always the case though? Do different AI architectures, or models that are less well fitted, encode ideas rather than quotes?

[1] https://en.wikipedia.org/wiki/Hutter_Prize

Edit: It would be an interesting experiment to use two LLMs to emulate a clean room implementation. The first is instructed to "produce a description of this program". The second, having never seen the program, in its prompt or training set, would be prompted to "produce a program based on this description". A human could vet the description produced by the first LLM for cleanliness. Surely someone has tried this, though it might be a challenge to get an LLM that is guaranteed not to have been exposed to a particular code base or its derivatives?
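In code, the skeleton of that experiment might look like the sketch below. Here call_llm is a hypothetical stand-in for whatever completion API is available, not a real library call:

    def call_llm(model: str, prompt: str) -> str:
        # Hypothetical stand-in for an LLM completion API.
        raise NotImplementedError

    def clean_room_translate(source_code: str) -> tuple[str, str]:
        # Stage 1: the "dirty" model produces a functional description,
        # never code.
        spec = call_llm(
            "model-a",
            "Produce a functional description of this program. Do not "
            "quote or paraphrase its code:\n" + source_code,
        )
        # A human vets `spec` here for leaked expression before stage 2.
        # Stage 2: a model that has never seen the program (in training
        # or prompt) implements the description.
        implementation = call_llm(
            "model-b",
            "Produce a program based on this description:\n" + spec,
        )
        return spec, implementation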

