Most of those 19861 lines allow it to be an all-in-one script for multiple activation methods and products. And, if you're still skeptical, then you are free to audit all 19861 lines yourself.
Maybe at the very least educate yourself before acting so smug.
Why would I look at the tutorial for a no-script activation when I was, you know, commenting on a point about a script? Did you forget to educate yourself enough to notice the difference?
> And, if you're still skeptical, then you are free to audit all 19861 lines yourself.
That's nonsense, of course; how would that help other users? Also, do you expect every single user of the crack to have the capabilities and time to do that?
I highly recommend people take a high-performance/race driving course if they can. I did a single-day one which involved high-speed maneuverability trials designed to be useful in emergency scenarios (swerving, braking, hard turns), followed by a few laps around a racetrack.
It's one of the best ways to figure out what it feels like to drive at the limits of your car and how you and it react in a very safe and controlled environment.
Those 999 other times, the system might work fine for the first 60 miles.
This is a cross-country trip. LA to New York is 2,776 miles, and that's without any detours for charging. It crashed within the first 2% of the journey. And it wasn't a small intervention or a minor accident either.
How you could possibly see this as anything other than FSD being a total failure is beyond me.
>asking a different question: how good would the FSD system be at completing a coast-to-coast trip?
>They made it about 2.5% of the planned trip on Tesla FSD v13.9 before crashing the vehicle.
This really does need to be considered preliminary data based on only one trial.
And so far that's 2.5% as good as you would need to make it one way, one time.
Or 1.25% as good as you need to make it there & back.
People will just have to wait and see how it goes, and whether anyone does anything to bring the average up.
That's about 100:1 odds against getting there & back.
One time.
Don't think I would want to be the second one to try it.
If somebody does take the risk and makes it without any human assistance though, maybe they (or the car) deserve a ticker-tape parade when they get there like Charles Lindbergh :)
It does look like lower performance than a first-time driving student.
I really couldn't justify 1000:1 with such "sparse" data, but I do get the idea that these are some non-linear probabilities of making it back in one piece.
It seems like it could easily be 1,000,000:1 and the data would look no different at this point.
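Just to put numbers on that hand-waving, here's a crude back-of-the-envelope sketch in Python. It assumes a constant per-mile crash probability (a big assumption) and takes only the ~2,776-mile route and the single failure at roughly mile 70 from this thread; everything else is illustrative, not measured data.

    # Crude back-of-the-envelope model: assume a constant per-mile crash
    # probability, estimated from the single observed failure at ~70 miles
    # (about 2.5% of a ~2,776-mile LA -> NY route). Illustrative only.
    route_miles = 2776
    miles_before_crash = 70
    p_crash_per_mile = 1 / miles_before_crash  # naive point estimate

    def p_complete(miles, p):
        """Probability of covering `miles` with no crash under this model."""
        return (1 - p) ** miles

    print(p_complete(route_miles, p_crash_per_mile))      # ~4e-18, one way
    print(p_complete(2 * route_miles, p_crash_per_mile))  # ~2e-35, round trip

Under that (admittedly shaky) point estimate the round-trip odds aren't 100:1 or even 1,000,000:1 against, they're astronomically worse, which is really just another way of saying a single data point can't distinguish "pretty bad" from "hopeless".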
> and we should not tolerate self-driving systems that are as good as the worst of us
The person you replied to didn't do that, though:
> But a self-driving system worth its salt should always be alert, scanning the road ahead, able to identify dangerous debris, and react accordingly. So, different pair of shoes...
I think they meant the person you were responding to never claimed that the person they were responding to said that we should tolerate self-driving systems that are no better than the worst of us, not that the person that the person you were responding to was responding to never said the thing you very clearly directly quoted.
I think you might have misunderstood someone here. The person you quoted made a generic statement about what we should expect from an autonomous vehicle, but never said (nor implied, imho) that the person they were responding to didn't expect the same.
> Let users on a free plan take advantage of the latest models through auto
It also describes how the auto selector works in more detail:
> When using auto model selection, VS Code uses a variable model multiplier based on the automatically selected model. If you are a paid user, auto applies a 10% request discount. For example, if auto selects Sonnet 4, it will be counted as 0.9x of a premium request; if auto selects GPT-5-mini, this counts as 0x because the model is included for paid users. You can see which model and model multiplier are used by hovering over the chat response.
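As a toy illustration of how that counting works (the 0.9x and 0x multipliers are just the ones quoted above; the model names and request counts below are made up for the example, not any official API):

    # Toy arithmetic only: multipliers are the ones quoted from the VS Code
    # docs above (Sonnet 4 at 0.9x after the 10% auto discount, GPT-5-mini
    # at 0x for paid users). Nothing here is an official API.
    auto_multipliers = {
        "Sonnet 4": 0.9,
        "GPT-5-mini": 0.0,
    }

    # Suppose auto picked these models for four consecutive requests.
    picked = ["Sonnet 4", "GPT-5-mini", "Sonnet 4", "Sonnet 4"]

    premium_requests_used = sum(auto_multipliers[m] for m in picked)
    print(premium_requests_used)  # 2.7 premium requests counted, not 4

So three Sonnet 4 answers plus one GPT-5-mini answer would count as 2.7 premium requests rather than 4.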
While the margin of error is much lower on a freeway due to the speeds, other drivers are generally a lot more predictable (also in part due to the speeds).
There was a good overview on here a while ago about the challenges[1]. You need to plan further into the future and your sensors need to reach further. It's also a much bigger challenge to collect sensor data, as fewer diversions happen per mile (but those that do have higher stakes).
Sure - a good freeway is actually a lot more predictable in most circumstances than city driving, so as a problem to solve it's likely a little bit less complicated. What I wonder about is what it feels like as a passenger. I wonder if it would be more or less frightening than being a passenger when my 17-year-old is driving.
I use adaptive cruise control a lot, where I rely on the car for keeping a safe distance.
I have a limited version of SuperCruise which means it operates hands-free on freeways but nowhere else. My wife's Equinox EV has the regular version, which operates on a lot of arterials near us and has more capabilities. The first time that the Equinox signaled, changed lanes to pass, signaled, then changed lanes back was shocking.
We moved to a small town and drive a lot more than we used to and I find that having those capabilities really helps relieve the stress.
I will say that I move to the center lane when going through a notorious set of curves on I-5 in Portland because my Bolt doesn't steer as smoothly as I'd like near the concrete barricades. I wanted SuperCruise because it has a fantastic safety record. There are lots of times it's not available but when it is, I have near-total confidence in it.
I took a Waymo that drove on an 'expressway' with a speed limit of 40 mph and it was definitely a different feeling. I did feel a bit scared: at 25 mph it feels like a gentle theme park ride; at 40 mph it's beyond that and feels dangerous.
Roads that get used more collect more debris. They also break and require maintenance more often. That maintenance is exceptionally disruptive to the normal operation of the road.
Other drivers aren't your only challenge out there.
> students and community use football. They do not use high school libraries.
That's literally the problem being presented, the "joke" as you called it. This isn't something to be weirdly proud about. This is something to be extremely concerned about. Prioritizing football over education in a place dedicated to the latter is absurd. Like, Idiocracy-levels absurd.
So which is worse?
- funding something which people benefit from, but is not part of your mission
- funding something which is in your mission, but nobody uses?
The former is questionable. The latter is just a waste.
Do you care about education outcomes? Or do you care about 90s symbols of education, like libraries?
I think the point is that the average person in a community cares more about football than education. There are places where they really do. You think that is idiotic and absurd -- I don't disagree. But we live in a democracy. You can't force people to care about what you think they should care about.
> They want to seem altruistic but want to also be the only provider.
This is an overly negative take. At the end of the day, they are still providing software and the source code free to use for practically every purpose except directly competing with them.
That's still altruistic while also being sensible in the real world rather than an ideal.
No, the license disallows use of the software for setting up a multi-user blogging system as a paid service.
You might say, "well wouldn't that be most of what people might want to do with it?" And you might be right, but so what? No one is entitled to build their business on the back of someone else's work, not without their permission anyway.
That certainly makes software like this no longer Free Software. But I'm not religious about it, and maybe that's ok sometimes.
(It also runs afoul of several parts of the OSI Open Source Definition, but maybe that's ok too.)
Not for practically every purpose. It's a blog platform to be used by services that provide blog hosting, just like his own business does, so any use of it would be directly competing with him. From TFA, he never wanted people to actually use it, just to look at the source code.
https://massgrave.dev/manual_hwid_activation / https://massgrave.dev/manual_ohook_activation / https://massgrave.dev/manual_kms38_activation