Hacker News | new | past | comments | ask | show | jobs | submit | cosgrove's comments

Can they really do it? Tesla is making steady progress and has reached a few new milestones recently.

They recently launched their Robotaxi service in Austin, and it seems to be as good as or better than Waymo. https://youtu.be/RcaBZenrCCs

They also recently autonomously delivered a car to a customer’s apartment straight from the factory line. https://youtu.be/lRRtW16GalE


Still only classified as Level 2 autonomy, which requires constant supervision. This is according to Tesla itself.

https://www.synopsys.com/blogs/chip-design/autonomous-drivin...


Tesla also backs that up with its actions, since all of the robotaxis are supervised by a dude in the front seat who can abort the drive if the Level 2 self-driving goes bonkers.


Supervision wasn't part of the original fantasy/sales pitch and calls into question their stated value proposition for competing with Uber and Waymo.


I feel like his follow-up tweet adds good context: https://x.com/pariljain/status/1790500423327191169?s=46

>Had a great productive chat with @elonmusk before leaving, would’ve stuck around longer if I didn’t have the itch to chase a specific vision.

>Don’t see any capacity eroding on his front like the article mentions


Is he referring to the electrek article?


This is nice to hear about. Can you tell me more about how your live results matched or diverged from your backtesting?

Did you list the returns of the commodities as a comparison, or are you trading those futures as well in the mix? (I know you only talked about ES/MES)


I've studied many systems over the years and never found any that matched or outperformed their backtests. So far our live results have hit between 1/4 and 3/4 of backtest performance, depending on the model. Needless to say, the high-inflation, high-interest-rate market climate of the last two years doesn't appear anywhere else in the backtest period, but conditions are starting to normalize now.

Nevertheless, it would be prudent to expect any algorithmic trading model to underperform its backtest going forward, but there's enough leeway in the CAGR and max drawdown figures to underperform the backtest and still produce substantial alpha, especially for the more advanced models.

Right now the models are specialized to trade equities. I may develop new models that trade commodities in the future though.
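For anyone unfamiliar with the two figures mentioned above, here is a minimal sketch of how CAGR and max drawdown are conventionally computed from an equity curve. The function names and the sample curve are hypothetical, not taken from the models being discussed.

```python
def cagr(equity, periods_per_year=252):
    """Compound annual growth rate of an equity curve."""
    years = (len(equity) - 1) / periods_per_year
    return (equity[-1] / equity[0]) ** (1 / years) - 1

def max_drawdown(equity):
    """Largest peak-to-trough decline, as a (negative) fraction."""
    peak = equity[0]
    worst = 0.0
    for value in equity:
        peak = max(peak, value)
        worst = min(worst, value / peak - 1)
    return worst

# Hypothetical equity curve: growth, a dip, then recovery.
curve = [100, 110, 121, 96.8, 108, 130]
print(round(cagr(curve, periods_per_year=1), 4))  # treating each step as a year
print(round(max_drawdown(curve), 4))              # -0.2, i.e. a 20% drawdown
```

The point of the "leeway" remark is that a model whose backtest shows, say, 30% CAGR at a -20% max drawdown can halve its CAGR live and still beat a buy-and-hold benchmark.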


>but I’m more so pushing back on the added cost and complexity of human elements NOT required by the built environment

Am I understanding you correctly that you would want something that still had cameras at head height, but simply didn't have the head form factor? And perhaps had 4 arms for some tasks that it would benefit from instead of being limited to two arms?

If so, how many environments are you going to build robots for? And how does your total overall build cost increase with each different model you build?

It doesn't matter if it's simpler in the end design if getting there ends up costing as much as building a general-purpose design in the first place.


Knowing starting battery temperature would go a long way to interpreting the results here. You could assume they all started the same, but it would be nice to know for sure.

Nowhere do they describe the starting conditions for the batteries / vehicles beyond saying 10% SOC at the start.


I was surprised at how similar the new 15 Pro looks and feels to the 11 Pro I used to carry. I also miss the gold colorway of the 13 Pro.


r/WallStreetBets folks do this type of thing all the time... wouldn't be surprised if it was someone gambling.


But if you're a paid subscriber you can access GPT4 through ChatGPT's interface. Are you saying there's a difference between using the GPT4 model alone vs. the GPT4 model in the ChatGPT interface? If so, could you please clarify with an example?


That's right. I just went to look, and it appears ChatGPT has been updated to the May version. But in my previous experience: ChatGPT's code is almost unusable; it constantly ignores the most basic logic, so I only use it to write short scripts. GPT-4, on the other hand, can handle multiple files referencing each other and produces code I barely need to review.


I have had the opposite experience. Many domains where it can’t produce correct code (counting BitSet, WatchOS apps), or doesn’t even begin to understand (distributed consensus).

When it works, it is great. When it doesn't, the gap can be large.

When you ask it to correct itself it will make new mistakes and keep alternating between different mistakes and omissions.

I haven’t tried it yet on a real codebase like say a database. Is it even possible to give it that context?


My understanding is that the author was talking about using GPT-4 via the ChatGPT interface. That's where the 25 messages every 3 hours limit comes from.

When they compare it to regular ChatGPT they are comparing GPT-4 to GPT-3.5 turbo.


I think it's neat that they did all these studies on feasibility and projected a start date into the 2000s.

Also neat that we ended up with complementary energy generation and storage technologies to fix the "it gets dark at night" problem of solar electricity generation!


No doubt, they are growing fast! And naturally they would make more than Tesla, since BYD makes more than just BEVs.

BYD's numbers often contain plugin hybrids (PHEV). So if you're trying to compare apples/apples, be sure to look at pure-battery EVs between the two.

Note 'electrified' from a different article from Barrons: "BYD delivered 206,089 electrified passenger vehicles in March [2023], up about 98% from the 104,338 delivered in March 2022. The March 2023 figures include 102,670 all battery electric vehicles and 103,419 plug-in hybrid models."
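To make the apples-to-apples point concrete, here is the split of that March 2023 "electrified" total, using only the figures from the Barron's quote above:

```python
# BYD March 2023 deliveries, from the Barron's figures quoted above.
bev = 102_670    # all-battery electric vehicles
phev = 103_419   # plug-in hybrid models
total_electrified = bev + phev

print(total_electrified)                   # 206089, matching the reported total
print(round(bev / total_electrified, 3))   # 0.498: only ~half of "electrified" is BEV
```

So a headline "electrified" number roughly doubles BYD's BEV count, which is why comparing it directly against Tesla's all-BEV deliveries is misleading.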


The article makes clear that BYD will surpass Tesla in BEV sales this year, not including hybrids.

