This is definitely out of my comfort zone. I've never programmed DSP before. But I was able to use Claude Code and have it help me build this in Cmajor.
I just wanted to show you guys because I'm super proud of it. It's a 100% faithful recreation based on the schematics, patents, and ROMs that were found online.
So please watch the video and tell me what you think
The reason why I think this is relevant is because I've been a programmer for 25 years and AI scares the shit out of me.
I'm not a programmer anymore. I'm something else now. I don't know what it is but it's multi-disciplinary, and it doesn't involve writing code myself--for better or worse!
Thanks!
There was something exciting about sleuthing out how those old machines worked: we used a black box approach, sending in test samples, recording the output, and comparing against the digital algorithm’s output. Trial and error, slowly building a sense of what sort of filter or harmonics could bend a waveform one way or another.
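A minimal sketch of that black-box loop in Python, assuming you have the test signal and the hardware's recording on disk (the filenames and the process() stand-in are hypothetical):

    # Black-box comparison: run the same test signal through the digital
    # algorithm and measure how far its output is from the hardware recording.
    import numpy as np
    import soundfile as sf

    test, sr = sf.read("test_sweep.wav")        # signal fed into the hardware
    hw_out, _ = sf.read("hardware_output.wav")  # what the unit actually did

    def process(x):
        # Stand-in for the candidate algorithm; swap in each filter/harmonics guess.
        return x

    alg_out = process(test)
    n = min(len(hw_out), len(alg_out))

    rms_err = np.sqrt(np.mean((hw_out[:n] - alg_out[:n]) ** 2))
    # Comparing magnitude spectra hints at *which* band is wrong, i.e. what
    # kind of filter to try next.
    spec_err = np.mean(np.abs(np.abs(np.fft.rfft(hw_out[:n]))
                              - np.abs(np.fft.rfft(alg_out[:n]))))
    print(f"time RMS error: {rms_err:.6f}, spectral error: {spec_err:.4f}")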
I feel like some of this is going to be lost to prompting, the same way hand-tool woodworking has been lost to power tools.
While there is something lost in prompting, people will always seek out first principles so they can understand what they are commanding and controlling, especially as old machines become new machines with capabilities that weren't even imaginable before because of the old software complexity wall.
But smart programmers will realize the world doesn't care about any of that at all.
People are already struggling to find work from an oversupply of talent and not enough demand.
Way, way, way fewer developers.
Granted, a lot of it is framing houses and remodeling. And there are fewer jobs in hardwood furniture these days. But a lot of that is because US furniture manufacturing moved to China 20 years ago, not because of power tools.
If anything, the advent of power tools in the 40s/50s made single family homes more affordable and increased construction demand.
Though using AI to build the devtools we used for signal analysis would have been helpful.
> I'm not a programmer anymore. I'm something else now. I don't know what it is but it's multi-disciplinary, and it doesn't involve writing code myself--for better or worse!
Yes, I agree. I think the role of software developer is going to evolve into much more of an administrative, managerial role, dealing more with the organisation you're in than with actually typing code. Honestly, I think it was probably always heading in this direction, but it's definitely quite a step change. I wrote about it a little incoherently on my blog just this morning: https://redfloatplane.lol/blog/11-2025-the-year-i-didnt-writ...
AI is a force multiplier: it makes bad worse, and it _can_ make good better. You need even more engineering discipline than before to make sure it's the latter and not the former. Even with chained code-quality MCPs and a whole bunch of instructions in AGENTS.md, there's often a need to intervene and course-correct, because AI can ignore AGENTS.md, and code that passes quality checks doesn't necessarily have a solid architecture.
That being said, I do agree our job is changing from merely writing code to more of a managerial role, like you've said. But there's a new limit: your ability to review the output. And you most definitely should review the output if you care about long-term, sustainable, quality software.
If software becomes cheaper to make, it amortizes at a higher rate, i.e., it becomes less valuable at a faster clip. This means more ephemeral software with a shorter shelf life. What exactly is wrong with a world where software is borderline disposable?
I've been using Photoshop since the 90s, and without having watched the features expand over the years, I don't think I would find it usable; it's not a useful tool for someone without a lot of experience.
This being said, short-lived, highly targeted software with fewer features for image creation and manipulation, catered to the individual and specific to an immediate task, seems advantageous.
Dynamism applied not to the code but to the products themselves.
Or something like that.
I'll update the post to reflect the reality, thanks for calling it out.
I completely agree with your comment. I think the ability to review code, architecture, abstractions matters more than the actual writing of the code - in fact this has really always been the case, it's just clearer now that everyone has a lackey to do the typing for them.
But AI being solely a force multiplier is not accurate; it is an intelligence multiplier. There are significantly better ways now to apply skills and taste with less worry about technical debt. AI coding agents have gotten to the point that they remove virtually ALL effort barriers, even for paying off technical debt.
While it is still important to pay attention to the direction your code is being generated in, the old fears and caution we attributed to previous iterations of AI codegen are largely being eroded, and this trend will continue to the point where our "specialty" will no longer matter.
I'm already seeing small businesses that laid off their teams, where the business owner is generating the code themselves. Defending the thinning moat of not only software but virtually all white-collar jobs is getting tougher.
I tried this for laughs with Gemini 3 Pro. It spat out the same ZDF (zero-delay feedback) implementation that is on countless GitHub repos, originating from the 2nd Pirkle FX book (2019).
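I can't verify exactly which variant Gemini produced, but the one-pole ZDF (topology-preserving transform) lowpass that circulates from Pirkle/Zavalishin looks roughly like this, sketched in Python rather than the book's C++:

    import math

    class ZDFOnePoleLP:
        # Zero-delay-feedback (TPT) one-pole lowpass: the textbook form
        # found all over GitHub.
        def __init__(self, cutoff_hz, sample_rate):
            # Prewarp the cutoff so analog and digital responses match there.
            g = math.tan(math.pi * cutoff_hz / sample_rate)
            self.G = g / (1.0 + g)
            self.z1 = 0.0  # one sample of state

        def process(self, x):
            v = (x - self.z1) * self.G   # resolve the implicit (zero-delay) loop
            y = v + self.z1              # lowpass output
            self.z1 = y + v              # state update
            return y

    lp = ZDFOnePoleLP(1000.0, 48000.0)
    step = [lp.process(1.0) for _ in range(5)]  # step response rises toward 1.0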
Since there is already an Ursa Major project on GitHub, made by an owner of the original hardware who also reimplemented it based on observation and turned it into a plugin, I wonder how much was regurgitated by the AI agent.
If you look at my other comment in this thread, my project is about designing proprioceptive touch sensors (robot skin) using a soft-body simulator largely built with the help of an AI. At this stage, absolute physical accuracy isn’t really the point. By design, the system already includes a neural model in the loop (via EIT), so the notion of "accuracy" is ultimately evaluated through that learned representation rather than against raw physical equations alone.
What I need instead is a model that is faithful to my constraints: very cheap, easily accessible materials, with properties that are usually considered undesirable for sensing: instability, high hysteresis, low gauge factor. My bet is that these constraints can be compensated for by a more circular system design, where the geometry of the sensor is optimized to work with them.
Bridging the gap to reality is intentionally simple: 3D-print whatever geometry the simulator converges to, run the same strain/stress tests on the physical samples, and use that data to fine-tune the sensor model.
Since everything is ultimately interpreted through a neural network, some physical imprecision upstream may actually be acceptable, or even beneficial, if it makes the eventual transfer and fine-tuning on real-world data easier.
Honestly, though, this does not help me estimate whether what you built is what you claim it to be. I'm not necessarily the audience for either project, but my point remains:
- when somebody claims to recreate something, regardless of why and how, it helps to understand how close they actually got.
It's not negative criticism, by the way. I'm not implying you did not recreate the DSP faithfully enough (or the other person the NES). I'm only saying that for onlookers, people like me who could potentially be interested but who do NOT have a good understanding of the process or of the original object being recreated, it is impossible to evaluate.
I do understand your point, and I think it’s a fair one: when someone claims to "recreate" something, it really helps readers to know how close the result is to the original, especially for people who don’t already understand the domain.
I was mostly reacting to the idea that faithfulness always has to be the primary axis of evaluation. In practice, only a subset of users actually care about 100% fidelity. For example with DSP plugins or NES emulators, many people ultimately judge them by how they sound or feel, especially when the original artifact is aesthetic in nature.
My own case is a bit different, but related. Even though I’m working on a sensor, having a perfectly accurate physical model of the material is secondary to my actual goal. What I’m trying to produce is an end result composed of a printable geometry, a neural model to interpret it, and calibration procedures. The physics simulator is merely a tool, not a claim.
In fact, if I want the design to transfer well from simulation to reality, it probably makes more sense to intentionally train the model across multiple variations of the physics rather than betting everything on a single "accurate" simulator. That way, when confronted with the real world, adaptation becomes easier rather than harder.
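In sim-to-real work this idea goes by the name domain randomization: sample the physics parameters per training episode instead of committing to one set. A minimal sketch, where the parameter names, ranges, and training stubs are all hypothetical:

    import random

    # Hypothetical ranges for material parameters the simulator exposes.
    PARAM_RANGES = {
        "stiffness":  (0.8, 1.2),   # multiplier on the nominal value
        "hysteresis": (0.5, 2.0),
        "creep_tau":  (5.0, 50.0),  # seconds
    }

    def sample_physics():
        # Draw one plausible physics variant instead of trusting a single one.
        return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

    def train_episode(model, physics):
        # Placeholder: simulate the sensor under these parameters and update
        # the interpretation model on the resulting readings.
        pass

    model = {}  # placeholder for the neural model's state
    for _ in range(1000):
        train_episode(model, sample_physics())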
So I fully agree that clarity about "how close" matters when that’s the claim. I’m just suggesting that in some projects, closeness to the original isn’t always the most informative metric.
One reason I find my case illuminating is that it makes the "what metric are we optimizing?" question very explicit.
Sure, I can report proxy metrics (e.g. prediction error between simulated vs measured deformation fields, contact localization error, force/pressure estimation error, sensitivity/resolution, robustness across hysteresis/creep and repeated cycles). Those are useful for debugging.
But the real metric is functional: can this cheap, printable sensor + model enable dexterous manipulation without vision – tasks where humans rely heavily on touch/proprioception, like closing a zipper or handling thin, finicky objects – without needing $500/sq-inch "microscope-like" tactile sensors (GelSight being the canonical example)?
If it gets anywhere close to that capability with commodity materials, then the project is a success, even if no single simulator configuration is "the" ground truth.
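For what it's worth, the proxy metrics above mostly reduce to a few lines each. A sketch with synthetic stand-in data, just to make them concrete:

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic stand-ins for simulated vs measured deformation fields and
    # for predicted vs true contact locations (arbitrary units).
    sim_field = rng.normal(size=(64, 64))
    meas_field = sim_field + rng.normal(scale=0.05, size=(64, 64))
    pred_xy = np.array([0.31, 0.72])
    true_xy = np.array([0.30, 0.75])

    field_rmse = np.sqrt(np.mean((sim_field - meas_field) ** 2))
    localization_err = np.linalg.norm(pred_xy - true_xy)
    print(f"deformation-field RMSE: {field_rmse:.4f}, "
          f"contact localization error: {localization_err:.4f}")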
What could OP’s next move be? Designing and building their own circuit. Likewise, someone who built a NES emulator might eventually try designing their own console. It doesn’t feel that far-fetched.
So on "So I fully agree that clarity about "how close" matters when that’s the claim. I’m just suggesting that in some projects, closeness to the original isn’t always the most informative metric." reminds me of https://en.wikipedia.org/wiki/Goodhart%27s_law
That being said as OP titled " I used AI to recreate X" then I expect I would still argue that the audience has now expectation that whatever OP created, regardless of why and how, should be relatively close to X. If people are expert on X then they can probably figure out quite quickly if it is for them "close enough" but for others it's very hard.
So on "So I fully agree that clarity about "how close" matters when that’s the claim. I’m just suggesting that in some projects, closeness to the original isn’t always the most informative metric." reminds me of https://en.wikipedia.org/wiki/Goodhart%27s_law
That being said as OP titled " I used AI to recreate X" then I would still argue that the audience has now expectation that whatever OP created, regardless of why and how, should be relatively close to X. If people are expert on X then they can probably figure out quite quickly if it is for them "close enough" but for others it's very hard.
Indeed, but to be fair, I'm not sure anybody claimed much "creativity", only that it worked... but that itself is still problematic. What does it mean to claim it even managed to implement an alternative if we don't have an easy way to verify?
And I'm not doing it by ear. I know the algorithm and have the exact coefficients, and there was no guesswork except for the potentiometer curves and parts of the room algorithm that I'm still working out, which is a completely separate component of the reverb.
But when I put it up for sale, I'll make sure to go into detail about all that so people who buy it know what they're getting.
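On the potentiometer-curve guesswork: when a taper isn't documented, a common starting guess is an exponential "audio taper" law fit through a chosen midpoint. A Python sketch; the 10% midpoint is a generic pot convention, not a measurement of this unit:

    def audio_taper(x, mid=0.1):
        # Map linear knob position x in [0, 1] onto an exponential curve
        # passing through (0, 0), (0.5, mid), (1, 1).
        b = (1.0 / mid - 1.0) ** 2   # chosen so f(0.5) == mid exactly
        a = 1.0 / (b - 1.0)
        return a * (b ** x) - a

    print([round(audio_taper(p / 4), 3) for p in range(5)])
    # -> [0.0, 0.025, 0.1, 0.325, 1.0]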
Consider reaching out to Audiority - I know they have some virtual recreations of Space Station hardware.
https://www.audiority.com/shop/space-station-um282
The video claims: "It utilizes the actual DSP characteristics of the original to bring that specific sound back to life." The author admits they have never programmed DSP. So how are they verifying this claim?
Also, its target market is not a technical crowd but people who make music. I'm optimizing for what they want to see (sound demos) rather than what a programmer would want to see.
I had a small epiphany a couple of weeks ago while thinking about robot skin design: using conductive 3D-printed structures whose electrical properties change under strain, combined with electrical impulses, a handful of electrodes, a machine-learning model to interpret the measurements, and computational design to optimize the printed geometry.
While digging into the literature, I realized that what I was trying to do already has a name: proprioception via electrical impedance tomography. It turns out the field is very active right now.
https://www.cam.ac.uk/stories/robotic-skin
That realization led me to build a Bergström–Boyce nonlinear viscoelastic parallel rheological simulator using Taichi. This is far outside my comfort zone. I’m just a regular programmer with no formal background in physics (apart from some past exposure to Newton-Raphson).
Interestingly, my main contribution hasn’t been the math. It’s been providing basic, common-sense guidance to my LLM. For example, I had to explicitly tell it which parameters were fixed by experimental data and which ones were meant to be inferred. In another case, the agent assumed that all the red curves in the paper I'm working with referred to the same sample, when they actually correspond to different conducting NinjaFlex specimens under strain.
Correcting those kinds of assumptions, rather than fixing equations, was what allowed me to reproduce the results I was seeking. I now have an analytical, physics-grounded model that fits the published data. Mullins effect: modeled. Next up: creep.
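Not the Bergström–Boyce model itself, but for readers who haven't met "parallel rheological" models: the simplest member of the family is a spring in parallel with a Maxwell branch (spring plus dashpot), the standard linear solid. Bergström–Boyce swaps those linear elements for nonlinear ones, but the time-stepping shape is the same. A sketch with illustrative parameter values:

    import numpy as np

    E0, E1, tau = 1.0, 2.0, 0.5   # parallel spring, Maxwell spring, relaxation time
    dt = 1e-3
    t = np.arange(0.0, 3.0, dt)
    strain = np.minimum(t, 0.5)   # ramp to 50% strain, then hold (relaxation test)

    eps_v = 0.0                   # viscous strain in the Maxwell branch
    stress = np.empty_like(t)
    for i, eps in enumerate(strain):
        eps_v += dt * (eps - eps_v) / tau        # dashpot relaxes toward eps
        stress[i] = E0 * eps + E1 * (eps - eps_v)

    print(f"peak stress {stress.max():.3f} -> relaxed stress {stress[-1]:.3f}")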
We’ll see how far this goes. I’ll probably never produce anything publishable, patentable, or industrial-grade. But I might end up building a very cheap (and hopefully not that inaccurate), printable proprioceptive sensor, with a structure optimized so it can be interpreted by much smaller neural networks than those used in the Cambridge paper.
If that works, the effort will have been worth it.
What I have now is similar to https://youtu.be/nXrEX6j-Mws?si=XdPA48jymWcapQ-8 but I haven’t implemented a cohesive UI yet.
Sometimes it feels like I'm living in a different world, reading the scepticism on here about AI.
I'm sure there are enterprise cases where it doesn't make sense, but for your everyday business owner, it's amazing what can be done.
Maybe it's a failure of imagination, but I can't imagine a world where this doesn't impact the enterprise in short order.
I work for a large org and maintenance hell is my job, so I see both sides I think.
To know this, you need to know what processes these businesses have been using for the past decade to run real full-time businesses with full-time staff. For example, you don't know just how bad the prior systems were that the self-built systems replaced.
With all due respect, you don't have all the info to make that calculation about my world, just as I don't have it for yours.
The same tool that helped me build our systems is not going to be the same tool that helps you maintain your large code base. But my point is that I'm on the front line of change, and my guess is that it's not going to be limited to businesses of my size. I don't know what your tool will look like, but I'd bet it's coming.
I've been doing this for 25 years, and I can tell you that the AI is a better coder than me, but I know how to use it. I review the code that it puts out, and it's better. I'm assuming the developers who are having a hard time with it are just not as experienced with it.
If you think your job is going to stay "programmer", I just don't see it. I think you need to start providing value and using coding as just a means to do that, rather than coding being valuable in itself. It's just not as valuable anymore.
Consequently, for me to better understand how special this is, I'd appreciate knowing (especially since I don't see a link to the code itself) how one goes from e.g. https://cmajor.dev/docs/GettingStarted#creating-your-first-p... to a working DSP.
[1] https://www.youtube.com/watch?v=GcdyOtO5Id0
Use AI like a CNC machinist uses a mill. You're still in the loop, but break it into manageable "passes" with testing touchpoints. These touchpoints allow you to understand what's going on. Nothing wrong with letting AI oneshot something, but it's more fun and less ennui to jump in and look around and exercise some control here and there. And, on larger systems, this is basically required. (for now, perhaps).
This is how I do it now: https://jodavaho.io/posts/ai-useage-2025.html
Regarding your own titling: you are now some type of "platform operator/manager" of these agents :-))
*edit* well duh, it’s the same guy!
[see https://news.ycombinator.com/item?id=45988611 for explanation]
It's not able to one-shot it yet, but I'm sure that's coming sometime this year. I did the UI a hundred percent by myself, and I went in there and tweaked it and tried to rebuild it, just trying to understand how reverb works, etc. I also did a lot of the software licensing myself, just because I have experience with that.
What is the point of asking it to teach you something to "understand it" if Claude can just do it for you? This is the real question everyone should be asking beyond just employment (employment will definitely change in the coming months, no doubt). I would pivot away from programming personally.
Pretty soon AI will do the QA portion as well. It will generate any piece of software, even games, for a cool $200/month from the vendor of your choice: Microsoft (OpenAI) or Google.
Companies will stop paying for SaaS or complex ERP software; they will just generate their own, which only the AI knows how to maintain, run, and add features to.
It's ironic that software developers are the most enthusiastic about automating their jobs out of existence. No union, no laws that interfere with free market forces.