The best part about this is that you know the type of people/companies using langchain are likely the type that are not going to patch this in a timely manner.
I'll admit that I haven't looked at it in a while, but as originally released, it was a textbook example of how to complicate a fundamentally simple and well-understood task (text templates, basically) with lots of useless abstractions that made it all sound more "enterprise". People would write complicated langchains, but when you looked under the hood all it was doing was some string concatenation, and the result was actually less readable than a simple template with substitutions in it.
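To make the comparison concrete, here's roughly what I mean (a sketch only; the LangChain import path has moved between versions, and the variable names are made up):

    # plain Python: a template with substitutions
    document = "..."  # whatever text you're working with
    prompt = "Summarize the following text in {style} style:\n\n{text}".format(
        style="formal", text=document
    )

    # roughly the same thing through LangChain's PromptTemplate
    from langchain_core.prompts import PromptTemplate

    template = PromptTemplate.from_template(
        "Summarize the following text in {style} style:\n\n{text}"
    )
    prompt = template.format(style="formal", text=document)

Both paths end in the exact same string; one of them just has a framework in the middle.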
I am not sure what the stereotype is, but I tried using langchain and realised most of the functionality actually adds more code than simply writing my own direct LLM API calls.
Overall I felt like it solves a problem that doesn't exist, and I've been happily sending direct API calls to LLMs for years without issues.
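"Direct API calls" for me is basically this (a sketch using the current openai Python SDK; the model name is just a placeholder):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you actually call
        messages=[
            {"role": "system", "content": "You summarize text."},
            {"role": "user", "content": "Summarize: <your text here>"},
        ],
    )
    print(resp.choices[0].message.content)

Hard to see what a framework adds on top of that for the simple cases.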
JSON Structured Output from OpenAI was released a year after the first LangChain release.
I think structured output with schema validation mostly replaces the need for complex prompt frameworks. I do look at the LC source from time to time because they do have good prompts baked into the framework.
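The kind of thing I mean, sketched with the openai SDK and pydantic (the parse helper and field names here are what I believe the current SDK exposes; treat the details as approximate):

    from openai import OpenAI
    from pydantic import BaseModel

    class Advisory(BaseModel):
        title: str
        severity: str
        affected_package: str

    client = OpenAI()
    resp = client.beta.chat.completions.parse(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "Extract an advisory from: <text>"}],
        response_format=Advisory,  # schema-validated structured output
    )
    advisory = resp.choices[0].message.parsed  # an Advisory instance

Once the model is held to a schema like that, most of what a prompt framework was doing for you disappears.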
IME you could get reliable JSON or other easily-parsable output formats out of OpenAI's models going back at least to GPT-3.5 or 4 in early 2023. I think that was a bit after LangChain's release, but I don't recall hitting problems that I needed to add a layer around in order to do "agent"-y things ("dispatch this to this specialized other prompt-plus-chatgpt-api-call, get back structured data, dispatch it to a different specialized prompt-plus-chatgpt-api-call") before it was a buzzword.
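The pattern, roughly (a sketch; the model name and helper are placeholders, and the JSON "contract" is just whatever you put in the prompt):

    import json
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        # one specialized prompt-plus-api-call
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # step 1: extract structured data, asking for JSON only
    raw = ask('Extract {"name": ..., "deadline": ...} from this email. Reply with JSON only: <email>')
    task = json.loads(raw)

    # step 2: dispatch the structured result to a different specialized prompt
    reply = ask(f"Draft a short status update about {task['name']}, due {task['deadline']}.")

No framework needed; it's just functions calling functions.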
No dig at you, but I take the average langchain user to be one who is either a) using it because their C-suite heard about it at some AI conference and had it foisted upon them, or b) doesn't care about software quality in general.
I've talked to many people who regret building on top of it but they're in too deep.
I think you may come to the same conclusions over time.
CVE-2025-68664 (langchain-core): object confusion during (de)serialization can leak secrets (and in some cases escalate further). Details and mitigations in the post.
Cheers to all the teams on sev1 calls over their holidays; we can only hope their adversaries are also trying to spend time with family. LangGrinch, indeed! (I get it, timely disclosure is responsible disclosure.)
If I want to clean up, summarize, translate, make more formal, make more funny, whatever, some incoming text by sending it through an LLM, I can do it myself.
I would rather read succinct English written by a non-native speaker filled with broken grammar than overly verbose but well-spelled AI slop. Heck, just share the prompt itself!
If you can't be bothered to have a human write literally a handful of lines of text, what else can't you be bothered to do? Why should I trust that your CVE even exists at all - let alone is indeed "critical" and worth ruining Christmas over?
Unfortunately, the sheer amount of ChatGPT-processed texts being linked has for me become a reason not to want to read them, which is quite depressing.
> I prefer reading the LLM output for accessibility reasons.
And that's completely fine! If you prefer to read CVEs that way, nobody is going to stop you from piping all CVE descriptions you're interested in through a LLM.
However, having it processed by a LLM is essentially a one-way operation. If some people prefer the original and some others prefer the LLM output, the obvious move is to share the original with the world and have LLM-preferring readers do the processing on their end. That way everyone is happy with the format they get to read. Sounds like a win-win, no?
However, there will be cases where lacking the LLM output, there isn't any output at all.
Creating a stigma around technology that is easily observed as being, in some form, accessible is to be expected in the world we live in. As it is on HN.
Not to say you are being any type of anything; I just don't believe anyone has given it all that much thought. I read the complaints and can't distinguish them from someone complaining that they need to make some space for a blind person using their accessibility tools.
You wouldn't complain as much if it were merely poorly written by a human. It gets the information across. The novelty of complaining about a new style of bad writing is being overdone by a lot of people, particularly on HN.
I would rather just read the original prompt that went in instead of verbosified "it's not X, it's **Y**!" slop.
Not everyone speaks English natively.
Not everyone has taste when it comes to written English.
More importantly though, the sheer amount of this complaint on HN has become a great reason not to show up.