


I’m dismayed at the possibility of this happening. What’s the point of an internet at all if one company controls, filters, and governs our entire usage of it?

I understand an argument can be made that Google is doing something similar, but at least you can still search and end up on an actual site, rather than just playing telephone via ChatGPT. This concept is horrifying for so many reasons.


I agree that a monopolized web is not friendlier to anyone. But looking at the trajectories of tech companies over the past decade, the unfortunate north star is distribution and the relentless pursuit of it.

Even in that dire circumstance, I hope the web versions keep up and stay maintained instead of being slowly deprecated, which is what happened to a lot of applications once they got mobile-native versions.


> What’s the point of an internet at all

Going back to first principles, we need to recall that the internet is for the dissemination of cat pictures, and at the end of the day every technical and organizational change must be analyzed through the lens of its impact on the effective throughput of these pictures.


For at least the past year, subtitles under dubs have been horrendous. I’ve watched a handful of Gundam series over this period, and while the subtitles under the Japanese audio are usually fine, the captions that run under the English audio more often than not get every single proper noun completely wrong, and half the dialog in general.

A generous explanation would be that the localized subtitles under the Japanese audio are licensed for use with that audio only, but that’s pure conjecture, and even if that’s the case, there is no excuse for how terrible the captions can be.


Getting proper nouns wrong is a flaw I thought we left behind in the fansub era.

The official translator should in theory have the Japanese closed captioning and copies of the anime's original manga or light novel to work from, as well as a direct line to the original studio for clarifications on spelling. In practice, I suspect they aren't given enough resources (particularly time) to do this, and the exact romanization of fictional names is not always clear from the katakana alone. Lately there are so many fantasy series where characters have made-up European-sounding names which don't translate unambiguously from katakana - is it Chilchuck or Chilchack, for example?


This is a problem as well, but what I see often is what seems to be the cheapest speech-recognition software they could find auto-transcribing the dub, and it falls over any time it meets a name or word it can’t guess from, say, the 1,000 most common words in the English language.

Of course, I just went back to scrub for examples and either I am remembering incorrectly which shows demonstrated it most frequently or they’ve fixed Zeta Gundam in the spots I’ve checked.


It gets even worse when the original mangaka typos the name and people follow a single typo like a religion. This happened with Kaoru Mori's "Emma", where the common English surname "Jones" was accidentally spelled "Jounse" by the author and used in translations without much questioning, only later to be found written correctly as "Jones" in a later chapter by the author herself.

I see this a lot, and it is a mild pet peeve of mine as well. Along the same lines, since I’m using Gundam as an example in this thread, I’ll point to a technology in the franchise called “psycommu” (pronounced in dubs as psy-com-moo), which is clearly transliterated from how it’s spelled in the original script without anyone taking a second look at it. I can’t imagine why they wouldn’t have just localized it to “psy-com,” but here we are, still calling it “psycommu” in recent series.

Somewhat related is the situation with Shingeki no Kyojin / Attack on Titan. "Attack on Titan" was the manga author's chosen English title, except the story does not take place on the moon Titan but is instead about giants attacking a human settlement. The Japanese title could be interpreted as "Attack of Titans", so everyone assumed "Attack on Titan" was just Engrish for that, which is why CommieSubs' fan translation, for example, went with "The Eotena Onslaught". [1]

Years later it turned out some of the giants had classes / types and the title was a reference to the Attack type of giant. Thus the English title would've been better as "The Attack Titan", and indeed the Japanese title could also have been interpreted as that, though it's only obvious in hindsight. The Japanese title was likely deliberately intended to have the double-meaning "Attack of Titans" and "The Attack Titan", though this double-meaning cannot be conveyed in English, and in fact we're now stuck due to inertia with a third English rendering that is completely disconnected from either meaning.

[1]: https://commiesubs.com/shingeki-no-kyojin-01/


Most fans I saw called it by the Japanese title, Shingeki no Kyojin. You could almost tell whether someone watched the official licensed translation or not, based on what they called the series.

Actually, another quirk is the German lyric in the first season's opening theme. Crunchyroll doesn't usually translate opening or ending lyrics, but translating the lyrics was standard practice in the fansub era, so the fansubbers did. However, they misheard the lyric as "Sie sind das Essen und wir sind die Jäger" - "You are the food and we are the hunters" - as if the line is spoken by the Titans (perhaps the English-speaking audience is primed to expect the Germans to be the bad guys in movies). The actual lyric was revealed in official Japanese sources as being spoken from the perspective of the humans: "Seid ihr das Essen? Nein, wir sind die Jäger!" - "Are you the food? No, we are the hunters!" However, the incorrect lyric persists among fans because the second opening theme superseded the first before the error was widely noted in the English-speaking anime community.


The dub subtitles should be different from the original-language subtitles, given that the dub script is not just a reading of the subtitle track, but that’s not an excuse for the dub subtitles being bad.

I agree that technically this would be incorrect, but I’d still appreciate the option to choose the subtitle track from the original language over the horrible auto-generated subtitle track.

The guide mentions that Hetzner was chosen over other providers and platforms because they didn’t wish to get tied into a whole ecosystem, and could take this setup and move it more or less anywhere.

I’ve seen you make this response to a couple different threads, and I wonder what you mean by it.

Are you just hoping to gain more insight on the differing proposed technologies and waiting for someone to give you more information, or are you expressing frustration that people have their own opinions on which layers to use for their own setups?

If you’re simply asking for information on how to use docker, and how to adapt TFA to include it, you’re in luck. One can find many tutorials on how to dockerize a service (docker’s own website has quite a lot of excellent tutorials and documentation on this topic), and plenty of examples of how to harden it, use SSL, et cetera. This is a very well trodden path.

That said, I’m tempted to read your response with the latter interpretation and my response would be to observe that holding a different opinion on something isn’t inherently ungrateful, or rude, nor is it presumptuous to share that one would, say, recommend dockerizing the production app instead of deploying directly to the server.

That’s the nature of discourse, and the whole reason why Hacker News has a comment section in the first place. A lovely article such as TFA is shared by someone, and then folks will want to talk about it and share their own insights and opinions on the contents. Disagreeing with a point in the article is a feature, not a bug.


You are reading too much into me. I am a noob and am interested in an opinion about a good tutorial. As you mentioned, I also asked on another thread and that dude was very friendly. Not so much luck here, it seems, since people even downvote me; well, their karma.

I read into it because your tone was very much that of someone who feels entitled to other people’s effort and time, and you spammed the exact same comment all over the place.

You could have written “I’d love to learn more, do you have a tutorial or walkthrough that you found helpful?” or formulated the question in any other way that demonstrates a respect for the commenter’s time and any effort they may put into finding a tutorial they think you would enjoy.

“So, where is the walkthrough” implies (at least to me) that what you are really saying is “obviously you must have written a walkthrough, or else your comment has no value, so why haven’t you given it to me.” It reads like a challenge, and given the way you’re communicating now, I feel justified in this reading.

A simple question can still be rude, and yours definitely sounded rude. I tried to give you room to exercise the benefit of the doubt, but based on this and your other comment, you just are entitled. Have a nice day.


I said the same thing in two locations, not "all over the place". Now you want to tell me how to ask, as if that is any of your business. Lies and threats, that's all you know, you irrelevant person. Have a bad day.

(Downvotes do not affect the downvoters’ karma.)

Hahaha, I am talking of real karma.

I don't begrudge it being overridden here since this is a demo, but ever since, like, way early Opera era, swiping to navigate is muscle memory for me, and I prefer it both on desktop and mobile/tablet. Much simpler than reaching for the button.

i was never attracted to the gattaca UX, it's a UI pre-crime.

reaching is muscle memory for me. buttons i like because i know what i'm getting, and what i'm getting can be many different things as buttons allow.


Only bothering to mention it in response to one of many review comments is nearly the same as not disclosing it.


We might understand the word "disclose" very differently, then. I'm amenable to taking issue with them not disclosing it up front, but then their guidelines - if the person above is to be believed - don't require it, and they did disclose it a few days after opening it. It was also not them responding to an allegation or anything; they disclosed it completely on their own terms. And that was two months ago.

I find that latter part particularly relevant, considering the hoopla is about AI bros being lazy dogs who can't be bothered to put in the hard work before attempting to contribute. The irony, then, is that the person above took an intentionally cut-short citation to paint the person in a somehow even more negative light than they'd otherwise have appeared in, while simultaneously not even bothering to review the conduct they're proposing to police to confirm it actually matches their knowingly uncharitable conjecture. Two wrongs not making a right, or whatever.


I searched the commit message and the page GitHub showed. That seems like reasonable due diligence on my part. In particular, demanding lots of effort on the part of people to compensate for AI spam is rather at the root of why this trend is damaging.

It should be clear that my objection is to the mix of CoC + AI in the context of LLVM, not to this specific instance where someone is acting within the rules LLVM has written down.


In the future I plan on disclosing the use of AI in the body of the original PR so it's clearer.


If nothing else, it gives maintainers a sign to point to when closing PRs with prejudice, and that's not nothing. Bad faith contributors will still likely complain when their PRs are closed, and having an obviously applicable policy to cite makes it harder for them to keep complaining without getting banned outright.


How deliciously entitled of you to decide that making other people try to catch ten tons of bullshit because you’re “learning quicker and can contribute at a level you otherwise couldn’t” is a tradeoff you’re happy to accept

If unrepentant garbage that you make others mop up at risk of their own projects’ integrity is the level you aspire to, please stop coding forever.


You can't comment like this on Hacker News, no matter what you're replying to. You've been on HN a long time and we've never had to warn you before, but please take a moment to read the guidelines and make an effort to observe them, especially these ones:

Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

Please don't fulminate. Please don't sneer, including at the rest of the community.

Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

https://news.ycombinator.com/newsguidelines.html


Go look at the PR, man; it’s pretty clear that he hasn’t just dumped out LLM garbage and has put serious effort and understanding into the problem he’s trying to solve.

It seems a little mean to tell him to stop coding forever when his intentions and efforts seem pretty positive for the health of the project.


One of the resolved conversations contains a comment along the lines of "you should warn about incorrect configuration in the constructor; look at how it is done in some-other-part-of-code."

This means that he did not put serious effort into understanding what others do in a highly structured project like LLVM, when, and why. He "wrote" the code and then dumped the "written" code on the community to catch his mistakes.


That is normal for a new contributor. You can’t reasonably expect knowledge of all the conventions of the project. There has to be effort to produce something good and not overload the maintainers, I agree, but missing such a detail is not a sign that that is not happening here.


Every hobby at some point turns into an exclusive, invitation-only club in order to maintain the quality of each individual's contribution, but then old members start to literally die and they're left wondering why the hobby died too. I feel like most people don't understand that any organization that wants to grow needs to sacrifice quality in order to attract new members.


Have you ever contributed to a very large project like LLVM? I would say clearly not, judging from the comment.

There are pitfalls everywhere. It’s not so small that you can get everything in your head with only a reading. You need to actually engage with the code via contributions to understand it. 100+ comments is not an exceptional amount for early contributions.

Anyway, LLVM is so complex I doubt you can actually vibe-code anything valuable, so there is probably a lot of actual work in the contribution.

There is a reason the community didn’t send them packing. Onboarding newcomers is hard, but it pays off.


  > Have you ever contributed to a very large project like LLVM?
Oh, I did. Here's one: https://github.com/mariadb-corporation/mariadb-columnstore-e...

  > I would say clearly not from the comment.
Of course, you are wrong.

  > It’s not so small that you can get everything in your head with only a reading.
PSP/TSP recommends writing typical mistakes into a list and using it to self-review and fix code before sending it out for review.

So, after reading code, one should write down what made him amazed and find out why it is so - whether it is a custom of a project or a peculiarity of code just read.

I actually have such a list for my work. Do you?

  > You need to actually engage with the code via contributions to understand it. 100+ comments is not an exceptional amount for early contributions.
No, it is not. Dozens of comments on a PR is an exceptional amount. Early contributions should be small so that one can learn typical customs and mistakes for self review before attempting a big code change.

That PR we discuss here contains a maintainer's requirement to remove excessive commenting - the PR's author definitely did not do a codebase-style-matching cleanup pass on his code before submission.


The personal dig was unwarranted. I apologise.

> So, after reading code, one should write down what made him amazed and find out why it is so - whether it is a custom of a project or a peculiarity of code just read.

Sorry but that’s delusional.

The number of people actually able to meaningfully read code, somehow identify what was so incredible it should be analysed despite being unfamiliar with the code base, maintain a list of their own likely errors, and self-review is so vanishingly low it might as well not exist.

If that’s the bar a potential new contributor has to clear, you will get exactly none.

I’m personally glad LLVM disagrees with you.


  >The number of people actually able to meaningfully read code, somehow identify what was so incredible it should be analysed despite being unfamiliar with the code base, maintain a list of their own likely errors, and self-review is so vanishingly low it might as well not exist.
A list of frequent mistakes gets collected after contributions (or attempted contributions). This is standard practice for high-quality software development and can be learned and/or trained, including on one's own.

LLVM, I just checked, does not have a formal list of code conventions and/or typical errors and mistakes. Had they had such a list, we would not have the pleasure of discussing this. The PR we are discussing would be much more polished and there would be far fewer than several dozen comments.

  > If that’s the bar a potential new contributor has to clear, you will get exactly none.

You are making a very strong statement, again.

I didn't make a decision on the tradeoff, the LLVM community did. I also disclosed it in the PR. I also try to mitigate the code review burden by doing as much review as possible on my end & flagging what I don't understand.

If your project has a policy against AI usage I won't submit AI-generated code because I respect your decision.


> I didn't make a decision on the tradeoff, the LLVM community did. I also disclosed it in the PR.

That's not what the GP mean. Just because a community doesn't disallow something doesn't mean it's the right thing to do.

> I also try to mitigate the code review burden by doing as much review as possible on my end

That's great but...

> & flagging what I don't understand.

It's absurd to me that people should commit code they don't understand. That is the problem. Just because you are allowed to commit AI-generated/assisted code does not mean that you should commit code that you don't understand.

The overhead to others of committing code that you don't understand and then asking someone to review it is a lot higher than asking someone for directions first so you can understand the problem and the code you write.

> If your project has a policy against AI usage I won't submit AI-generated code because I respect your decision.

That's just not the point.


> It's absurd to me that people should commit code they don't understand

The industrywide tsunami of tech debt arising from AI detritus[1] will be interesting to watch. Tech leadership is currently drunk on improved productivity metrics (via lines of code or number of PRs), but I bet velocity will slow down and products will become more brittle due to extraneous AI-generated code, with a lag, so it won't be immediately apparent. Only teams with rigorous reviews will fare well in the long term, but they may be punished in the short term for "not being as productive" as others.

1. From personal observation: when I’m in a hurry, I accept code that does more than is necessary to meet the requirements, or is merely not succinct. Whereas pre-AI, less code would be merged with a “TBD” tacked on.


I agree with more review. The reason I wrote the PR is because AI keeps using `int` in my codebase when modern coding guidelines suggest `size_t`, `uint32_t`, or something else modern.
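To make the motivation concrete, here is a minimal, hypothetical illustration (not taken from the PR or the LLVM codebase; the function names are made up) of the pattern in question:

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  // The kind of loop an assistant tends to produce: a plain `int` index,
  // which draws -Wsign-compare warnings and can overflow for very large inputs.
  long long sum_with_int(const std::vector<std::uint32_t>& v) {
      long long total = 0;
      for (int i = 0; i < v.size(); ++i)  // signed/unsigned comparison
          total += v[i];
      return total;
  }

  // What modern guidelines generally prefer: a width-appropriate type such as
  // std::size_t for the index (or a range-based for loop, avoiding it entirely).
  std::uint64_t sum_with_size_t(const std::vector<std::uint32_t>& v) {
      std::uint64_t total = 0;
      for (std::size_t i = 0; i < v.size(); ++i)
          total += v[i];
      return total;
  }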


Where did you disclose it?


Only after getting reviews so it is hidden by default: https://github.com/llvm/llvm-project/pull/146970#issuecommen...


Disclosing that you used AI three days after making the PR, after 4 people had already commented on your code, doesn't sit right with me. That's the kind of thing that should be disclosed in the original PR message. Especially so if you are not confident in the generated code


Sounds like a junior vibe coder with no understanding of software development trying to boost their CV. Or at least I hope that’s the case.


I graduated literally 3 months ago so that's my skill level.

I also have no idea what the social norms are for AI. I posted the comment after a friend on Discord said I should disclose my use of AI.

The underlying purpose of the PR is ironically because Cline and Copilot keep trying to use `int` when modern C++ coding standards suggest `size_t` (or something similar).


That’s no different from onboarding any new contributor. I cringe at the code I put out when I was 18.

On top of all that, every open source project has a gray-hair problem.

Telling people excited about a new tech to never contribute makes sure that all projects turn into TempleOS when the lead maintainer moves on.


Onboarding a new contributor implies you’re investing time into someone you’re confident will pay off over the long run as an asset to the project. Reviewing LLM slop doesn’t grant any of that; you’re just plugging thumbs into cracks in the glass until the slop-generating contributor gets bored or feels like they got what they wanted, and then moves on to another project.

I accept that some projects allow this, and if they invite it, I guess I can’t say anything other than “good luck,” but to me it feels like long odds that any one contributor who starts out eager to make others wade through enough code to generate that many comments purely as a one-sided learning exercise will continue to remain invested in this project to the point where I feel glad to have invested in this particular pedagogy.


>Onboarding a new contributor implies you’re investing time into someone you’re confident will pay off over the long run as an asset to the project.

No you don't. And if you're that entitled to people's time you will simply get no new contributors.


I’ll grant you that, but at least a new contributor who actually writes the code they contribute has offered some level of reciprocity with respect to the time it takes to review their contributions.

Trying to understand a problem and taking some time to work out a solution proves that you’re actually trying to learn and be helpful, even if you’re green. Using an LLM to generate a nearly-thousand-line PR and yeeting it at the maintainers with a note that says “I don’t really know what this does” feels less hopeful.

I feel like a better use of an LLM would be to use it for guidance on where to look when trying to see how pieces fit together, or maybe get some understanding of what something is doing, and then by one’s own efforts actually construct the solution. Then, even if one only has a partial implementation, it would feel much more reasonable to open a WIP PR and say “is this on the right track?”


Not getting thousand line AI slop PRs from resume builders who are looking for a "LLVM contributor" bullet point before moving on is a net positive. Lack of such contributors is a feature, not a bug.

And you can't go and turn this around into "but the gate keeping!" You just said that expecting someone to learn and be an asset to a project is entitlement, so by definition someone with this attitude won't stick around.

Lastly, the reason the resume builder wants the "LLVM contributor" bullet point in the first place is precisely because getting it normally takes effort. If it becomes known in the industry that getting it simply requires throwing some AI PR over the wall, the value of this signal will quickly diminish.


Unrelated to my other point, I absolutely get wanting to lower barriers, but let’s not forget that TempleOS was the religious vanity project of someone who could have had a lot to teach us if not for mental health issues that were extant early enough in the roots of the project as to poison the well of knowledge to be found there. And he didn’t just “move on,” he died.

While I legitimately do find TempleOS to be a fascinating project, I don’t think there was anything to learn from it at a computer science level other than “oh look, an opinionated 64-bit operating environment that feels like classical computing and had a couple novel ideas”

I respect that instances like it are demonstrably few and far between, but don’t entertain its legacy far beyond that.


> While I legitimately do find TempleOS to be a fascinating project, I don’t think there was anything to learn from it at a computer science level other than “oh look, an opinionated 64-bit operating environment that feels like classical computing and had a couple novel ideas”

I disagree, actually.

I think that his approach has a lot to teach aspiring architects of impossibly large and complex systems, such as "create a suitable language for your use-case if one does not exist. It need not be a whole new language, just a variation of an existing one that smooths out all the rough edges specific to your complex software".

His approach demonstrated very large gains in an unusually complicated product. I can point to projects written in modern languages that come nowhere close to being as high-velocity as his, because his approach was fine-tuned to the use-case of "high-velocity while including only the bare necessities of safety."


I think the project and reviewers are both perfectly capable of making their own decisions about the best use of their own time. No need to act like a dick to someone willing to own up to their own behavior.


Your final sentence moved me. Moved to flagging the post, that is.


Well, some people just operate under the "some of you may die, but it's a sacrifice I am willing to make" principle...


Citations eight and nine amuse me.

I readily and merrily agree with the articles that deriving slurs from existing racist or homophobic slurs is a problem, and the use of these terms in fashions that mirror actual racial stereotypes (e.g. "clanka") is pretty gross.

That said, I think that asking people to treat ChatGPT with "kindness and respect" is patently embarrassing. We don't ask people to be nice to their phone's autocorrect, or to Siri, or to the forks in their silverware drawer, because that's stupid.

ChatGPT deserves no more or less empathy than a fork does, and asking for such makes about as much sense.

Additionally, I'm not sure where the "crushed by Woke" nonsense comes from. "It's so hard for the kids nowadays, they can't even be racist anymore!" is a pretty strange take, and shoving it in to your comment makes it very difficult to interpret your intent in a generous manner, whatever it may be.


> I think that asking people to treat ChatGPT with "kindness and respect" is patently embarrassing. We don't ask people to be nice to their phone's autocorrect, or to Siri, or to the forks in their silverware drawer, because that's stupid.

> ChatGPT deserves no more or less empathy than a fork does.

I agree completely that ChatGPT deserves zero empathy. It can't feel, it can't care, it can't be hurt by your rudeness.

But I think treating your LLM with at least basic kindness is probably the right way to be. Not for the LLM - but for you.

It's not like, scientific - just a feeling I have - but it feels like practicing callousness towards something that presents a simulation of "another conscious thing" might result in you acting more callous overall.

So, I'll burn an extra token or two saying "please and thanks".


I do agree that just being nicer is a good idea, even when it's not required, and for largely the same reasons.

Incidentally, I almost crafted an example of whispering all the slurs and angry words you can think of in the general direction of your phone's autocomplete as an illustration of why LLMs don't deserve empathy, but ended up dropping it because even if nobody is around to hear it, it still feels unhealthy to put yourself in that frame of mind, much less make a habit of it.


I believe there's also some research showing that being nice gets better responses. Given that it's trained on real conversations, and that's how real conversation works, I'm not surprised.


It's hard not to recall a Twilight Zone episode, and even a Night Gallery one, where those who were cruel to machines were just basically cruel people generally.


do you also beg your toilet to flush?


If it could hold a conversation I might.

I also believe AI is a tool, but I'm sympathetic to the idea that, due to some facet of human psychology, being "rude" might train me to be less respectful in other interactions.

Ergo, I might be more likely to treat you like a toilet.


Any "conversation" with a machine is dehumanizing.

Are you really in danger of forgetting the humanity of strangers because you didn't anthropomorphize a text generator? If so, I don't think etiquette is the answer


the thing is, though, that the text generator self-anthropomorphizes.

perhaps if an LLM were trained to be less conversational and more robotic, i would feel less like being polite to it. i never catch myself typing "thanks" to my shell for returning an `ls`.


> the thing is, though, that the text generator self-anthropomorphizes.

and that is why it must die!


alias 'thanks'="echo You\'re welcome!"


Words can change minds, it doesn't seem like a huge leap.

Your condescension is noted though.


It also makes the LLM work better. If you’re rude to it it won’t want to help as much.


I understand what you're saying, which is that the response it generates is influenced by your prompt, but feel compelled to observe that LLMs cannot want anything at all, since they are software and have no motivations.

I'd probably have passed this over if it wasn't contextually relevant to the discussion, but thank you for your patience with my pedantry just the same.


if the primary mode of interaction with my toilet was conversational, then yeah, i'd probably be polite to the toilet. i might even feel a genuine sense of gratitude since it does provide a highly useful service.


> So, I'll burn an extra token or two saying "please and thanks"

I won't, and I think you're delusional for doing so


Interesting. I wonder if this is exactly an example of what the person you're responding to just now is saying. That being rude to an LLM has normalized that behavior such that you feel comfortable being rude to this person.


Eh, this doesn't strike me as wrong-headed. They aren't doing it because they feel duty-bound to be polite to the LLM, they maintain politeness because they choose to stay in that state of mind, even if they're just talking to a chatbot.

If you're writing prompts all day, and the extra tokens add up, I can see being clear but terse making a good deal of sense, but if you can afford the extra tokens, and it feels better to you, why not?


The prompts that I use in production are polite.

Looking at it from a statistical perspective: if we consider the text from the public internet used during pretraining, we can expect, with few exceptions, that polite requests achieve their objective more often than terse or plainly rude ones. This will be severely muted during fine-tuning, but it is still there in the depths.

It's also easier in English to conjugate a command form simply by prefixing "Please" which employs the "imperative mood".

We have moved up a level in abstraction. It used to be punch cards, then assembler, then syntax, now words. They all do the same thing: instruct a machine. Understanding how the models are designed and trained can help us be more effective in that; just like understanding how compilers work can make us better programmers.


No time for a long reply, but what I want to write has video games at the center. Exterminate the aliens! is fine, in a game. But if you sincerely believe it's not a game, then you're being cruel (or righteous, if you think the aliens are evil), even though it isn't real.

(This also applies to forks. If you sincerely anthropomorphize a fork, you're silly, but you'd better treat that fork with respect, or you're silly and unpleasant.)

What do I mean by "fine", though? I just mean it's beyond my capacity to analyse, so I'm not going to proclaim a judgment on it, because I can't and it's not my business.

If you know it's a game but it seems kind of racist and you like that, well, this is the player's own business. I can say "you should be less racist" but I don't know what processing the player is really doing, and the player is not on trial for playing, and shouldn't be.

So yes, the kids should have space to play at being racist. But this is a difficult thing to express: people shouldn't be bad, but also, people should have freedom, including the freedom to be bad, which they shouldn't do.

I suppose games people play include things they say playfully in public. Then I'm forced to decide whether to say "clanker" or not. I think probably not, for now, but maybe I will if it becomes really commonplace.


> But if you sincerely believe it's not a game, then you're being cruel (or righteous, if you think the aliens are evil), even though it isn't real.

let me stop you right there. you're making a lot of assumptions about the shapes life can take. encountering and fighting a grey goo or tyrannid invasion wouldn't have a moral quality any more than it does when a man fights a hungry bear in the woods

it's just nature, eat or get eaten.

if we encounter space monks then we'll talk about morality


Sorry, I was unclear — that racism comment was tongue in cheek. Regardless of political leanings, I figured we can all agree that racism is bad!

I generally agree re:chatGPT in that it doesn’t have moral standing on its own, but still… it does speak. Being mean to a fork is a lot different from being mean to a chatbot, IMHO. The list of things that speak just went from 1 to 2 (humans and LLMs), so it’s natural to expect some new considerations. Specifically, the risk here is that you are what you do.

Perhaps a good metaphor would be cyberbullying. Obviously there’s still a human on the other side of that, but I do recall a real “just log off, it’s not a real problem, kids these days are so silly” sentiment pre, say, 2015.

