> Google will begin to verify the identities of developers distributing their apps on Android devices, not just those who distribute via the Play Store
This is absolutely unacceptable. That's like you having to submit your personal details to Microsoft in order to just run a program on Windows. Absolutely nuts and it will not go as they think it will.
> At the end of it, they were sketching a completely different architecture without my "PMing". Because they finally understood who was actually using our product.
I cannot help but read this whole experience as: “We forced an engineer to take sales calls and we found out that the issue was that our PMs are doing a terrible job communicating between customer and engineering, and our DevOps engineer is more capable/actionable at turning customer needs into working solutions.”
He wants educators to instead teach “how do you think and how do you decompose problems”
Amen! I attend this same church.
My favorite professor in engineering school always gave open book tests.
In the real world of work, everyone has full access to all the available data and information.
Very few jobs involve paying someone simply to look up data in a book or on the internet. What they will pay for is someone who can analyze, understand, reason and apply data and information in unique ways needed to solve problems.
Doing this is called "engineering". And this is what this professor taught.
Meaning to use your device you need to have a contractual relationship with a foreign (unless you are in the US) third party that decides what you can or cannot do with it. Plus using GrapheneOS is less of an option every day, since banks and other "regulated" sectors use Google Play Protect and similar DRMs to prevent you from connecting from whatever device you want. Client-side "trust" means the provider owning the device, not the user.
Android shouldn't be considered Open Source anymore: source code is published in batches, only part of the system is open, and more and more apps are being pulled into the closed Google ecosystem.
Maybe it's time for a third large phone OS, whether it comes from China getting fed up with the US and Google's shenanigans (Huawei has HarmonyOS but it's not open) or some "GNU/Linux" touch version that has a serious ecosystem. Especially when more and more apps and services are "mobile-first" or "mobile-only" like banking.
The funny thing is that Stallman started his fight about half a century ago, and on regular days Hacker News shits on him for eating something off of his foot and for not being polished and diplomatic, while loving the practical aspects of Corporate Open Source and the gratis goodies, and not particularly caring about Free Software.
On this day suddenly folks come out of the woodwork advocating for half baked measures to achieve what Stallman portrayed but they still hardly recognize this was EXACTLY his concern when he started the Free Software movement.
If this is a thing then the solution they offer is incorrect. A big giant red screen: “warning the identity of this application developer has not been verified and this could be an application stealing your data, etc” would have worked.
What they want is to get rid of apps like YouTube Vanced that are making them lose money (and other Play Store apps)
I have looked suicide in the eyes before. And reading the case file for this is absolutely horrific. He wanted help. He was heading in the direction of help, and he was stopped from getting it.
He wanted his parents to find out about his plan. I know this feeling. It is the clawing feeling of knowing that you want to live, despite feeling like you want to die.
We are living in such a horrific moment. We need these things to be legislated. Punished. We need to stop treating them as magic. They had the tools to prevent this. They had the tools to stop the conversation. To steer the user into helpful avenues.
When I was suicidal, I googled methods. And I got the number of a local hotline. And I rang it. And a kind man talked me down. And it potentially saved my life. And I am happier, now. I live a worthwhile life, now.
But at my lowest... an AI model designed to match my tone and be sycophantic to my every whim? It would have killed me.
> importers must declare the exact amount of steel, copper, and aluminum in products, with a 100% tariff applied to these materials. This makes little sense—PCBs, for instance, contain copper traces, but the quantity is nearly impossible to estimate.
Wow this administration is f**ing batshit insane. I thought the tariffs would be on raw metals, not anything at all that happens to contain them.
• Access to your private data—one of the most common purposes of tools in the first place!
• Exposure to untrusted content—any mechanism by which text (or images) controlled by a malicious attacker could become available to your LLM
• The ability to externally communicate in a way that could be used to steal your data (I often call this “exfiltration” but I’m not confident that term is widely understood.)
If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to that attacker.
If I may add my view as a formerly high-achieving semiconductor worker whom Intel would benefit greatly from having right now: a lot of us pivoted to software and machine learning to earn more money. My first 2 years as a software engineer earned me more RSUs than a decade in semiconductors. Semiconductor work is not prestigious in the U.S., despite its strategic importance. By contrast, it is highly respected and relatively well remunerated in the countries doing well at it.
From this lens, the silver lining of the ongoing software layoffs may be to stem the bleeding of semiconductor workers into software. If Intel were really smart, they'd be hiring right now the people they couldn't get or retain 3-5 years ago.
Have we not learned, yet? The number of points this submission has already earned says we have not.
People, do not trust security advisors who tell you to do such things, especially ones who also remove the original instructions entirely and replace them with instructions to run their tools instead.
The original security advisory is at https://github.com/nrwl/nx/security/advisories/GHSA-cxm3-wv7... and at no point does it tell you to run the compromised programs in order to determine whether they are compromised versions. Or to run semgrep for that matter.
There will be a new kind of job for software engineers, sort of like a cross between working with legacy code and toxic site cleanup.
Like back in the day being brought in to “just fix” an amalgam of FoxPro-, Excel-, and Access-based ERP that “mostly works” and only “occasionally corrupts all our data”, which ambitious sales people put together over the last 5 years.
But worse: because “ambitious sales people” will no longer be constrained by the sandboxes of Excel or Access, they will ship multi-cloud edge-deployed Kubernetes micro-services wired with Kafka, and it will be harder to find someone to talk to in order to understand what they were trying to do at the time.
"You may also need to upload official government ID."
This won't end well for Google or the governments involved when the people get so angry that they are forced to roll this back. Switch to an alternative phone OS.
Personally...we all know the Play Store is chock full of malicious garbage, so the verification requirements there don't do jack to protect users. The way I see it, this is nothing but a power grab, a way for Google to kill apps like Revanced for good. They'll just find some bullshit reason to suspend your developer account if you do something they don't like.
Every time I hear mentions of "safety" from the folks at Google, I'm reminded that there's a hidden Internet permission on Android that can neuter 95% of malicious apps. But it's hidden, apparently because keeping users from using it to block ads on apps is of greater concern to Google than keeping people safe.
> we will be confirming who the developer is, not reviewing the content of their app or where it came from
This is such an odd statement. I mean, surely they have to be willing to review the contents of apps at some point (if only to suspend the accounts of developers who are actually producing malware), or else this whole affair does nothing but introduce friction.
TFA had me believing that bypassing the restriction might've been possible by disabling Play Protect, but that doesn't seem to be the case since there aren't any mentions of it in the official info we've been given.
On the flip side, that's one less platform I care about supporting with my projects. We're down to just Linux and Windows if you're not willing to sell your soul (no, I will not be making a Google account) just for the right to develop for a certain platform.
"Balking at the $50+ charge for turnkey assembly, I opted to take the financially responsible route and pay $200+ for a hot-air rework station to solder it myself."
I genuinely do not understand how the idea of building a total surveillance police state, where all speech is monitored, can even be seriously considered by an allegedly pro-democracy, pro-human-rights government, much less make it into law.
Also:
Step 1: Build mass surveillance to prevent the 'bad guys' from coming into political power (it's OK, we're the good guys).
Step 2: Your political opponents capitalize on your genuinely horrific overreach and legitimize themselves in the eyes of the public as fighting against tyranny (unfortunately for you, they do have a point). They promise to dismantle the system if they come to power.
Step 3: They get elected.
Step 4: They don't dismantle the system, now the people you planned to use the system against are using it against you.
I’ll be honest: there is a very good chance this won’t work .... At the same time, the China concerns are real, Intel Foundry needs a guarantee of existence to even court customers, and there really is no coming back from an exit. There won’t be a startup to fill Intel’s place. The U.S. will be completely dependent on foreign companies for the most important products on earth, and while everything may seem fine for the next five, ten, or even fifteen years, the seeds of that failure will eventually sprout, just like those 2007 seeds sprouted for Intel over the last couple of years. The only difference is that the repercussions of this failure will be catastrophic not for the U.S.’s leading semiconductor company, but for the U.S. itself.
Very well argued. It's such a stunning dereliction that the US let things get to this point. We were doing the "pivot to Asia" over a decade ago, but no one thought to find TSMC on a map and ask whether Intel was driving itself into the dirt? "For want of a nail the kingdom was lost", except in this case the nail is your entire metallurgical industry, outsourced to the territory you plan on fighting over.
Ironic: Western politicians thought opening up to trade with China would lead to it adopting a Western model of government. Instead it's led to the USA adopting the Chinese one.
This is really bad. I think that most people on HN will agree with that.
The problem is that most normal people (HN is not normal - mostly for the better) don't even understand what sideloading is - let alone actually care.
How can we fix this?
(aside from making people care - apathy enables so many political problems in the current age, but it's such a huge problem that this definitely isn't going to be the impetus to fix it)
I've been using Go more or less in every full-time job I've had since pre-1.0. It's simple for people on the team to pick up the basics, it generally chugs along (I'm rarely worried about updating to latest version of Go), it has most useful things built in, it compiles fast. Concurrency is tricky but if you spend some time with it, it's nice to express data flow in Go. The type system is most of the time very convenient, if sometimes a bit verbose. Just all-around a trusty tool in the belt.
But I can't help but agree with a lot of points in this article. Go was designed by some old-school folks that maybe stuck a bit too hard to their principles, losing sight of the practical conveniences. That said, it's a _feeling_ I have, and maybe Go would be much worse if it had solved all these quirks. To be fair, I see more leniency in fixing quirks in the last few years, like at some point I didn't think we'd ever see generics, or custom iterators, etc.
The points about RAM and portability seem mostly like personal grievances though. If it was better, that would be nice, of course. But the GC in Go is very unlikely to cause issues in most programs even at very large scale, and it's not that hard to debug. And Go runs on most platforms anyone could ever wish to ship their software on.
But yeah the whole error / nil situation still bothers me. I find myself wishing for Result[Ok, Err] and Optional[T] quite often.
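For what it's worth, Go's generics (1.18+) are enough to sketch that wish. This is a minimal, hypothetical helper type, not anything from the standard library:

```go
package main

import (
	"errors"
	"fmt"
)

// Result is a toy Result[T] built on Go generics: it carries either
// a value or an error, instead of the conventional (T, error) pair.
type Result[T any] struct {
	val T
	err error
}

func Ok[T any](v T) Result[T]      { return Result[T]{val: v} }
func Err[T any](e error) Result[T] { return Result[T]{err: e} }

// IsOk reports whether the Result holds a value rather than an error.
func (r Result[T]) IsOk() bool { return r.err == nil }

// Unwrap surfaces the conventional (T, error) pair at the boundary.
func (r Result[T]) Unwrap() (T, error) { return r.val, r.err }

func divide(a, b int) Result[int] {
	if b == 0 {
		return Err[int](errors.New("division by zero"))
	}
	return Ok(a / b)
}

func main() {
	fmt.Println(divide(10, 2).IsOk()) // true
	fmt.Println(divide(1, 0).IsOk())  // false
}
```

In practice the ergonomics still fall short of a language-level Result (no pattern matching, no `?` operator), which is probably why the wish persists.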
In general I would rather the government take a stake in corporations they're bailing out. I think the "too big to fail" bailouts in the past should have come with more of a cost for the business, so on one hand I'm glad this is finally happening.
On the other hand, I wish it were a more formalized process rather than this politicized "our president made a deal to save America!" / "Intel is back and the government is investing BUY INTEL SHARES" media event. These things should follow a strict set of rules and processes so investors and companies know what to expect. These kinds of deals should be boring, not media events.
I live in one of the areas they are actively testing/training in. Their cars consistently behave better and more safely than most human drivers that I’m forced to share the road with.
As semi-autonomous and autonomous cars become the norm, I would love to see obtaining a driver's license ratchet up in difficulty in order to remove dangerous human drivers from the road.
These changes in direction (spending billions, freezing hiring) over just a few months show that these people are as clueless about what's going to happen with AI as everyone else. They just have the billions and therefore dictate where the money goes, but that's it.
What I like about this post is that it highlights something a lot of devs gloss over: the coding part of game development was never really the bottleneck. A solo developer can crank out mechanics pretty quickly, with or without AI. The real grind is in all the invisible layers on top; balancing the loop, tuning difficulty, creating assets that don’t look uncanny, and building enough polish to hold someone’s attention for more than 5 minutes.
That’s why we’re not suddenly drowning in brilliant Steam releases post-LLMs. The tech has lowered one wall, but the taller walls remain. It’s like the rise of Unity in the 2010s: the engine democratized making games, but we didn’t see a proportional explosion of good games, just more attempts. LLMs are doing the same thing for code, and image models are starting to do it for art, but neither can tell you if your game is actually fun.
The interesting question to me is: what happens when AI can not only implement but also playtest -- running thousands of iterations of your loop, surfacing which mechanics keep simulated players engaged? That’s when we start moving beyond "AI as productivity hack" into "AI as collaborator in design." We’re not there yet, but this article feels like an early data point along that trajectory.
Will once again re-up the concept of a “right to root access”, to prevent big corps from pulling this bs over and over again: https://medhir.com/blog/right-to-root-access