Hacker News | seberino's comments

That may be true, but I think the unspoken assumption in your comment is that somehow, without capitalism, greed magically melts away. How do you explain the constant, rampant corruption in communist and socialist countries over the last 100 years, if not GREED?


I know that it doesn't. Greed will be ever-present, yes, but that doesn't mean that it's a one-way ratchet. It's something we have to keep fighting against all the time. Greed starts out as a driver of progress, then eventually becomes an impediment to progress. The other constant there is progress! No dam will block a river forever.


The definition of capitalism is the private ownership of production and its use to generate profits.

I think a hidden assumption you may have about capitalism is that corruption is an unintended side effect, when it actually follows from its principles.

How is a society to maintain unmarred democratic institutions when its elements are fundamentally unequal? Put more clearly: How can people have the same amount of political power when one class (capitalist) OWNS the production of what the other class (workers) need?

The mythology of capitalist society paints them both as equals and the state as neutral. This is a tactic to preserve the appearance of a democratic backbone. They can afford this mythology because capitalists own the airwaves and have, and can have, the most influence in the state. In fact, due to this fundamental inequality, capitalists are, for all practical purposes, the state.

Capitalist societies put political power up for auction; corruption reaches its highest manifestation within capitalist societies.

Now to your point. Greed will never "magically melt away". Greed can only be controlled through democratic control of what permits greed in the first place.

Communism/socialism isn't about magically doing or undoing anything; it's the science of creating firm and inalienable working-class power. It must start with democratic control of production and local people's councils. Greed will not magically melt away; greed must be constantly cut out by everyone HAVING the political power to cut it out. This means people's councils will be convened at the neighborhood level, people's courts will be staffed not by professional judges but by rotating, locally elected citizens, council delegates will be bound by law to be only, and exclusively, messengers at higher-level councils, etc. This is just a small picture of what democracy is. It is not for me to say specifically how, of course, but communism does not involve blindly and powerlessly trusting political candidates, the way capitalist society requires.

There is a reason communism is demonized by the people who control our society.


Not sure that is warranted. AI will create exciting changes to society for the better. These times are uncertain but certainly not depressing.


I don't doubt this; however, the question is whether AI will do this in our lifetime. Industrialization led to prosperity in the long term, but initially it led primarily to the proletarianization of the people. Are you willing to accept a devaluation of your skills and a decline in your prosperity so that in 50 to 100 years there is a chance that AI will lead to a better future?


No one is going to ask if you're willing to accept this - it's simply going to happen whether we like it or not.


Some people will answer without being asked. The most we will get out of that is that the word "saboteur" will get a more modern synonym (not sure what it will be, but the inventor of cheap EMP grenades will have the biggest say in that). The future will, of course, steamroll over such answers, as it always has, but we'll all feel the bumps on the way.


I don't think with any confidence we can say it will be for the better. Or at least, not on balance for the better.


AI companies are not even pretending they will improve society.

They are promising CEOs they can eliminate their workforce to increase profits. For people working for a wage it’s all downside, no upside.


Uncertainty is frequently a contributor to depression. It is one of the most reliable stress triggers, and prolonged stress, especially when paired with low perceived control, is a direct path to depression. So if something is uncertain, it is often depressing as well.


I think we can assume it will create disruption, but by definition that is both positive and negative for different individuals and dimensions, and it is small solace if society improves while your life languishes or declines. This is just what's happened to a generation of young males in the US, and it is having huge repercussions. I think you're right to suggest the goal is to avoid letting the uncertainty make you depressed, but that does not automatically make it so for everyone.


It’s positive if you are already wealthy, negative if you have to work for a living.


Is that AI generated by any chance? Seems like an AI crystal ball that you're looking into.

It's fine to have that opinion, but please frame it as an opinion, or else give me the lotto numbers for next week if you can predict the future that accurately.


"AI will create exciting changes to society for the better"

Why are you certain of this?


Prove your assertion.


it is depressing to me for exactly the same reason


Thanks. Amazing work.


Some people also love riding horses, making their own clothes, and hunting for their own food. None of that changes the march of technology for the majority. The plain fact is there are too many benefits to self-driving cars, like price and safety.


Imagine comparing driving a car to riding horses, lmao!


The eventual goal is to use it at extremely high altitude with drones so that that isn't much of a risk.


The laser could ricochet off disintegrating drones (think of the props) and cause collateral damage to people on the ground.

Also see this (with remaining eye): https://www.funraniumlabs.com/2024/07/how-i-got-my-laser-eye...


There's danger and risk in everything worth doing.

The dams could burst and drown millions.

The highrise you live in could topple over or burn.

The bridge carrying the train could crash into the valley.

The passenger plane could lose lift and crash into a residential building.

The power source in your car could catch fire.

We engineer around it.


Counterpoint:

We've known about global warming for almost 50 years and yet to this day the debate is divided between "eh" and "no, it isn't".

Coal plants have been consistently depositing smog in our lungs with no end in sight.

Boeing has successfully lobbied itself out of criminal charges for the deadly consequences of their (IMHO) negligence regarding the 737 Max.

Companies have stopped pretending that they'll put human lives before profit. As a human, I am therefore beyond giving any company the benefit of the doubt that they'll "engineer around it" when it comes to safety.


> We've known about global warming for almost 50 years and yet to this day the debate is divided between "eh" and "no, it isn't".

So much work is being done on that front.

> Coal plants have been consistently depositing smog in our lungs with no end in sight.

Not the #1 killer of humans. We're still making lots of progress on healthspan, especially in pulling people out of poverty. We're well ahead of where we were 50 years ago. People are no longer starving to death at unprecedented scale.

> Boeing has successfully lobbied itself out of criminal charges for the deadly consequences of their (IMHO) negligence regarding the 737 Max.

Sometimes bad things happen. Not to diminish these lives, but this is just a footnote in the list of impressive things technology and society have accomplished. Boeing and its execs should face punishment, but this is a very small downside to a much greater set of accomplishments that vastly outweighs the bad.

> Companies have stopped pretending that they'll put human lives before profit. As a human, I am therefore beyond giving any company the benefit of the doubt that they'll "engineer around it" when it comes to safety.

What are you talking about? To just cite one instance, Waymo is already going to be one of the biggest needle movers in terms of human lives saved. Or another - look at what Moderna did during Covid.

Life is better than it used to be, not worse. You're wearing miasma-tinted glasses.


> So much work is being done on that front.

Yes, it is, but not nearly enough and it had to be done by dragging corporations kicking, lying and screaming every step of the way. Corporations, mind you, powerful enough to drive several of the wars of the last 50 years.

> We're still making lots of progress on healthspan

Yes, as long as your health insurance covers the treatment. UnitedHealth investors are suing the company for not being willing to follow the "aggressive, anti-consumer tactics" that got their CEO murdered [1].

I've also personally heard a mildly drunk executive of a Fortune 500 company brag about how, on a cancer-related lawsuit, the US government would never let them go bankrupt because that would dry up the taxes they would collect otherwise. Time proved him right.

> Boeing and execs should face punishment, but...

They should, but they won't. And they are not the exception.

> Waymo is already going to be one of the biggest needle movers in terms of human lives saved (...) look at what Moderna did during Covid.

Uber made ignoring the law at a worldwide scale their whole business model. And how has the market rewarded Moderna for their service to humanity? Their stock is at its lowest since 2020.

People have been making the comparison to the robber barons recently, but I think they miss the point where the robber barons didn't have a worldwide surveillance apparatus and the kind of propaganda power that Orwell could only dream of.

I agree that technology has advanced for good in a lot of areas, but I also think it's worth noticing that many of these advances are behind gatekeepers who will burn a forest to the ground before upgrading infrastructure that's one hundred years old to keep shareholders happy.

[1] https://futurism.com/neoscope/unitedhealthcare-investors-wil...


Counterpoint: Military doesn't care or get prosecuted by anyone other than the military. Pew pew, motherfucker.

Defense companies' corporate postings are rather refreshing to read, since they seem to always be proud of how many Iraqis they bulldozed to death.


Yes, but this [beaming large amounts of power wirelessly over long distances] isn't worth doing.


Also, an implicit but obvious goal is military applications. Murder laser beams are a potential benefit in those cases.


That will be quite a sight -- seeing the videos of people on the ground, filming future drone laser battles


I think I see what you mean. I suppose it is kinda like an opaque binary; nevertheless, you can use it freely since it is all under the MIT license, right?


Yes, even for commercial purposes, which is great. But the point of "open source", and the reason it became popular, is that you can modify the underlying source code of the binary and then recompile it with your modifications included (as well as sell or publish your modifications). You can't do that with DeepSeek or most other LLMs that claim to be open source. The point isn't that this makes it bad; the point is we shouldn't call it open source, because we shouldn't lose focus on the goal of a truly open-source (or free-software) LLM on the same level as ChatGPT/o1.


You can modify the weights, which is exactly what they do during initial training. You don't even need to do it in exactly the same fashion: you could change things such as the optimizer and it would still work. So in my opinion it is nothing like an opaque binary. It's just data.
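For what it's worth, the "just data" point can be sketched in a few lines. This is purely a toy illustration with made-up numbers and plain SGD, not anything from DeepSeek's actual training setup:

```python
# Toy illustration: model weights are just arrays of numbers that any
# training loop updates in place. Plain SGD here; swapping in Adam or
# another optimizer would update the same data, just differently.

# A made-up "checkpoint": one 4x4 weight matrix.
w = [[0.5 for _ in range(4)] for _ in range(4)]

def sgd_step(weights, grads, lr=0.1):
    """One gradient-descent update: w <- w - lr * grad."""
    return [[wij - lr * gij for wij, gij in zip(wrow, grow)]
            for wrow, grow in zip(weights, grads)]

grad = [[1.0] * 4 for _ in range(4)]  # pretend fine-tuning gradient
w = sgd_step(w, grad)                 # every entry moves from 0.5 toward 0.4
```

Nothing about the checkpoint cares which optimizer produced the update; the released file is modifiable data either way.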


We have the weights and the code for inference, in the analogy this is an executable binary. We are missing the code and data for training, that's the "source code".


> that's the "source code"

Then it’s never distributable, and any definition of open source requiring it to be is DOA. It’s interesting as an argument against copyright. But that's academic.


It's not academic. Why can't ChatGPT tell me how to make meth? Why doesn't DeepSeek want to talk about Tiananmen Square? What other behaviors has the model been molded into? Without the full source, we don't know.


While I appreciate the argument that the term "open source" is problematic in the context of AI models, I think saying the training data is the "source code" is even worse, because it broadens the definition to be almost meaningless. We never considered data to be source code, and realistically, for 99.9999% of users the training data is not the preferred way of modifying the model: they don't have millions of dollars to retrain the full model, and they likely don't even have the disk space to store the training data.

Also, I would say arguing that the model weights are just the "binary" is disingenuous, because nobody wants releases that contain only the training data and training scripts without the model weights (which would be perfectly fine for open-source software, if we argue that the weights are just the binaries). Such releases would be useless to almost everyone, because almost no one has the resources to train the model.
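A rough sense of scale for the storage point (the parameter and token counts below are approximations for a V3-scale model, and the bytes-per-unit figures are order-of-magnitude assumptions, not exact release numbers):

```python
# Back-of-envelope comparison: released weights vs. raw training data.
# Assumed, approximate figures for a DeepSeek-V3-scale model:
params = 671e9    # ~671B parameters (reported V3 size)
tokens = 14.8e12  # ~14.8T training tokens (reported V3 pretraining)

weights_tb = params * 1 / 1e12  # ~1 byte/param at fp8  -> ~0.7 TB of weights
data_tb = tokens * 2 / 1e12     # ~2 text bytes/token   -> ~30 TB of raw text

print(f"weights: ~{weights_tb:.1f} TB, training data: ~{data_tb:.0f} TB")
```

Even with generous rounding, the training corpus is tens of times larger than the checkpoint, before you count the compute needed to actually use it.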


I agree it would be nice to have the training specifics. Nevertheless, everything DeepSeek released is under the MIT license, right? So you can set up a cloud LLM, fine-tune it, and do whatever else you wish with it, right? That is pretty significant, no?


It is, but words mean things. If I said I got you a puppy and gave you a million dollars instead, that'd be nice, but what about the puppy?


Wait, time out. I thought DeepSeek's stuff was all MIT licensed too, no? What limitations are you thinking of that DeepSeek still has?


I am referring to this one: https://huggingface.co/deepseek-ai/DeepSeek-V3/blob/main/LIC...

It seems a bit more permissive than Llama's (no MAU threshold).


Wow. Your link is frustrating because I thought everything was under the MIT license. Why did people claim it is MIT licensed if they sneaked in this additional license?


So, the older DeepSeek-V3 model weights are sadly not permissively licensed.

But the recent DeepSeek-R1-Zero and DeepSeek-R1 have MIT licensed weights.


Thank you very much. That was helpful. Do we need the older model weights to use the recent DeepSeek-R1-Zero and DeepSeek-R1 models?


I can't be 100% certain, but I think the good news is: no. There seem to be the exact same number of safetensor files for both, and AFAICT the file sizes are identical.

https://huggingface.co/deepseek-ai/DeepSeek-V3/tree/main https://huggingface.co/deepseek-ai/DeepSeek-R1/tree/main


I'm not an expert but didn't they release the weights under MIT license? So you can make your own LLM with complete control right?

I agree it would be nice to know the details of their training, but simply calling this drop an "opaque binary" is seriously underselling it, no?

