nowittyusername's comments | Hacker News

Humans increasingly live in more and more chaotic environments where they perceive a lack of agency (more and more accurately so), so we crave stability where we have more control. All successful escapes from reality provide that stability.

It's probably also a case of too much bad reality: here a climate crisis, there a corrupt <insert person of influence>, over there a war, next door refugees and everywhere poverty and inflation. And that's just on the front page.

Geeees, I think I need some escapism...


This is a very common sentiment I see everywhere, and it really highlights how uneducated most people are about technology in general. Most folks seem to expect things to work magically and perform physics-breaking feats, and it honestly baffles me. I would expect this attitude from maybe the younger generations who grew up only as users of technology like tablets and smartphones, but I honestly never expected millennials to be in the same camp; nope, they are just as ignorant. And I think to myself: did I grow up different? Were my friends not using the same Nintendo cartridges, VCRs, camcorders, and all the other tech that you had no choice but to learn at least basic fundamentals to use? Apparently most people never delved deeper than surface level on how to use these things, and everything else went right over their heads...

Vonnegut, in On Writing Science Fiction, reflected on Player Piano being labeled sci-fi simply because it involved machines: "The feeling persists that no one can simultaneously be a respectable writer and understand how a refrigerator works, just as no gentleman wears a brown suit in the city."

> Apparently most people never delved deeper than surface level on how to use these things, and everything else went right over their heads...

This is really the truth of all things in life.


Plenty of people have a story of managers asking them to do impossible or nonsensical things. It should be unsurprising people will do the same with a machine.

> Most folks seem to expect things to work magically and perform physics breaking feats and it honestly baffles me

This is how it is being marketed, and I guess people are silly enough to believe marketing, so it's not too surprising.


I've been thinking of fully switching to Linux for a while now, but there was just not enough pressure for me to switch over. I think this might do it. I think the universe is done whispering into my ear and is finally shouting from the rooftops.

Yeah, it takes some work. I did that about a year ago. Super happy. All games through Steam work; for GOG, some require a bit of tinkering, but it's not too bad.

Do note what others have said about mods and some publishers' multiplayer games and music software. I am not affected but it's best to keep in mind.


Your post is making me think maybe there is quite a lot of lost knowledge out there that has pertinence to modern-day agentic AI system building. I am currently experimenting with building my own AI system that uses LLMs as the "engine", but the "harness" around the LLM will do most of the heavy lifting. It will have internal verification systems, grounding information, metadata, etc. I find myself writing a lot of automated scripts as part of that process, as I have a personal motto that it's always better to automate everything possible with scripts first and only use LLMs as a last resort, for things you can't script away. And that is making me look more and more into old techniques that were established way back when...

Well, from what I remember from university, most expert systems went bust because they were promising what ML promises today.

The maintenance of the rules (or, in your case, scripts) for complex tasks is much more work than anyone is willing to commit to. Another big problem was eliciting tacit knowledge, and no one was able to code that in reliably.

ML today promises that you won't have to hand-code the rules: you just push in data, the system figures out what the rules are, and then it can handle new data.

I don’t have to code the rules to check if there is a cat in the picture; that definitely works. Making rules from data that is not so often found on the internet is still going to be a hassle. Rules change, the world changes, and the knowledge cutoff is, I think, still a problem.

In the end, yes, you can build a nice system for some use case where you plug in an LLM for classification, and you most likely will make money on it. It just won't be "what was promised", i.e. AGI, and we are stuck with that promise; a lot of people won't accept less than that.


> Your post is making me think maybe there is quite a lot of lost knowledge out there that has pertinence to modern-day agentic AI system building.

I agree with that. In fact, that mindset is what led me to this book in the first place. I was exploring an older book on OPS5[1] and saw this book mentioned, and started looking for it and found that it is freely available online. Seemed like something the HN crowd might enjoy, so here we are.

> And that is making me look more and more into old techniques that were established way back when...

I suspect that there is some meat on that bone. I'm exploring this particular area as well. I think there's some opportunity for hybridization between LLMs / GenAI and some of these older approaches.

[1]: https://en.wikipedia.org/wiki/OPS5


I spent several years working with OPS5 in the 1980s. The Common Lisp code, especially the Rete network stuff, was fairly straightforward to modify and generally work with. Good times.

I wonder what would happen if you used an LLM to write the rules?

That's exactly what I am focusing on: getting rid of the biggest pain points of script building while also benefiting from modern-day AI systems. "Have your cake and eat it too." We know that scripts are far more reliable than LLMs, but building a good, complex script is a pain in the ass and takes tremendous effort to create, maintain, and debug. So we leverage modern generative systems and have them be the builders and maintainers of those scripts. The most intensive part for a human then comes down to creating a robust AI system that can build scripts reliably and then use them in conjunction with its own generative capabilities. Basically, teach the machine to create and maintain the tools it will use; after that, drop the human from the loop.
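For what it's worth, that loop fits in a few lines. A minimal sketch, with everything hypothetical (`generate_script` stands in for whatever local model you call, and the task here is a trivial adder); the point is that the LLM only runs at build time, and the verified script runs deterministically ever after:

```python
import os
import subprocess
import sys
import tempfile

def generate_script(task_description):
    # Hypothetical stand-in for an LLM call; a real harness would prompt a
    # local model with the task plus a test contract and return its code.
    return "import sys\nprint(int(sys.argv[1]) + int(sys.argv[2]))\n"

def verify(script_path):
    # Deterministic check: run the generated tool on known inputs.
    out = subprocess.run([sys.executable, script_path, "2", "3"],
                         capture_output=True, text=True)
    return out.stdout.strip() == "5"

def build_tool(task, max_attempts=3):
    # Ask the model for a script; keep it only if it passes verification.
    for _ in range(max_attempts):
        code = generate_script(task)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        if verify(path):
            return path  # cached tool: later calls skip the LLM entirely
        os.unlink(path)
    return None

tool_path = build_tool("add two integers passed on argv")
```

The verification step is what lets you drop the human from the loop: a script that fails its contract is discarded and regenerated instead of being trusted.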

I must say, at least for me personally, when I hear about such levels of incompetence it rings alarm bells in my head, making me think that maybe intentional malice was involved. Like someone higher up set the whole thing up to happen this way because there was some benefit we are unaware of. I think this belief maybe stems from a lack of imagination about how stupid humans can really get.

Most people overestimate the prevalence of malice and underestimate the prevalence of incompetence.

What do you make of this? The guy who was in charge of restoring the system was found dead

https://www.thestar.com.my/aseanplus/aseanplus-news/2025/10/...


My guess would be that either he felt it was such a monumental cockup that he had to off himself or his bosses thought it was such a monumental cockup that they had to off him.

That's pretty nuts. They should make a mini documentary on how they made it, all the hurdles they had to overcome, and all that jazz.

I agree. LLMs are capable of doing this right out of the box if you provide grounding data, like the current time and a few other things, in the system prompt. It's really odd that this is getting any attention.

You guys are so funny, when papers like these exist: https://arxiv.org/abs/2404.11757

Numerous studies, INCLUDING the OpenTSLM paper, have PROVEN they are NOT able to do this out of the box. Did you even check the results at all? They literally compare OpenTSLM against standard text-only baselines; Gemma3-270M performs better than GPT-4o using tokenized time series alone. So I guess you guys are being ironic.


I understand how annoying it is when people post shallow dismissals of your work on the internet, but please don't give in to the annoyance when replying. It makes the thread worse, and it's against the HN guidelines: https://news.ycombinator.com/newsguidelines.html.

I don't know if this is your work or not, but I appreciate your wanting to defend it...we just need you to do that in a way that doesn't attack others, no matter how wrong they are or you feel they are. Easier said than done of course, but we're all working on it together.


An experiment is not a proof.

If this is the level of one of the contributors to the OpenTSLM paper (which you very obviously are), no wonder due diligence wasn't done properly.


It’s less about proof and more about demonstrating a new capability that TSLMs enable. To be fair, the paper did test standard LLMs, which consistently underperformed. @iLoveOncall, can you point to examples where out of the box models achieved good results on multiple time-series? Also, what kind of time-series data did you analyze with Claude 3.5? What exactly did you predict, and how did you assess reasoning capabilities?

Hi folks, I made a research tool that lets you perform deterministic inference on any local large language model. This way you can test any variable change and see for yourself the effects it has on the LLM's output. It also lets you run automated reasoning benchmarks on a local language model of your choice, so you can measure the perplexity drop of any quantized model or the differences in reasoning capability between models or sampling parameters. It also has a fully automated way of converging on the best sampling parameters for a given model's reasoning capabilities. I made two videos for the project so you can see what it's about at a glance: the main guide is here https://www.youtube.com/watch?v=EyE5BrUut2o, the installation video is here https://youtu.be/FJpmD3b2aps, and the repo is here https://github.com/manfrom83/Sample-Forge. If you have more questions, I'd be glad to answer them here. Cheers.
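For readers wondering how inference can be deterministic at all, here is a toy sketch of the idea (this is not Sample-Forge's code, just the principle it relies on): greedy decoding always picks the argmax, and any nonzero temperature is reproducible if the RNG seed is fixed.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    # Greedy decoding (temperature -> 0) is fully deterministic; with a
    # nonzero temperature, reproducibility requires a fixed-seed RNG.
    if temperature <= 1e-6:
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random
    weights = [math.exp(l / temperature) for l in logits]
    total = sum(weights)
    return rng.choices(range(len(logits)), [w / total for w in weights])[0]

logits = [0.1, 2.5, 0.3]
greedy = sample_token(logits, temperature=0.0)
a = sample_token(logits, temperature=0.8, rng=random.Random(42))
b = sample_token(logits, temperature=0.8, rng=random.Random(42))
```

With identical seeds (and identical kernels on the backend), two runs produce identical token streams, which is what makes A/B testing a single sampling parameter meaningful.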

Your comment reminds me of another one I saw on Reddit. Someone said they found that using git diffs as a way to manage context and reference chat history worked best for their AI agent. I think they're on to something.
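A toy version of that idea (the function name and truncation policy are my own, not anything from the Reddit comment): instead of replaying the full chat history, feed the agent the working-tree diff as a compact record of what has actually changed.

```python
import subprocess

def diff_context(repo_path, max_chars=4000):
    # The working-tree diff is a compact "what changed" summary for the
    # agent, cheaper than replaying the whole conversation into context.
    diff = subprocess.run(
        ["git", "-C", repo_path, "diff"],
        capture_output=True, text=True, check=True,
    ).stdout
    return diff[:max_chars]  # truncate to fit the context budget
```

The diff doubles as ground truth: the model's claims about its own edits can be checked against it.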

I've used Claude Code for about 3 months now. I was a big fan until recent changes lobotomized it, so I switched over to Codex about 2 weeks ago, and I'm loving it so far; it's a much better experience. Today, with the introduction of the new model, I've been refactoring an old Claude Code project all day, and so far things are looking good. I am very impressed; OpenAI cooked hard here...

