35 comments

  • haberman 1 day ago
    I love the concept -- I've often wished that lean languages like Lua had more support for static typing, especially given the potential performance benefits.

    I also love the focus on performance. I'm curious if you've considered using a tail call design for the interpreter. I've found this to be the best way to get good code out of the compiler: https://blog.reverberate.org/2021/04/21/musttail-efficient-i... Unfortunately it's not portable to MSVC.

    In that article I show that this technique was able to match Mike Pall's hand-coded assembly for one example he gave of LuaJIT's interpreter. Mike later linked to the article as a new take on how to optimize interpreters: https://github.com/LuaJIT/LuaJIT/issues/716#issuecomment-854...

    Python 3.14 also added support for this style of interpreter dispatch and got a modest performance win from it: https://blog.reverberate.org/2025/02/10/tail-call-updates.ht...

    • beariish 1 day ago
      I did experiment with a few different dispatch methods before settling on the one in Bolt now, though not with tail calls specifically. The approach I landed on was largely chosen because, in my testing, it competes with computed-goto solutions while also compiling on MSVC, but I'm absolutely open to trying other things out.
      • mananaysiempre 20 minutes ago
        There’s one thing that tail calls do that no other approach to interpreters outside assembly really can, and that is decent register allocation. Current compilers only ever try to allocate registers for one function at a time, and somehow that invariably leads them to do a bad job when given a large blob of a single interpreter function. This is especially true if you don’t isolate your cold paths into separate functions marked uninlineable (and preferably preserve_all or the like). Just look at the assembly and you’ll usually find that it sucks.

        (Whether the blob uses computed gotos or loop-switch is less important these days, because Clang for example is often smart enough to actually replicate your dispatch in the loop-switch case, avoiding the indirect branch prediction problem that in the past meant computed gotos were preferable. You do need to verify that this optimization actually happens, though, because it can be temperamental sometimes[1].)

        By contrast, tail calls with the most important interpreter variables turned into function arguments (that are few enough to fit into registers per the ABI—remember to use regparm or fastcall on x86-32) give the compiler the opportunity to allocate registers for each bytecode’s body separately. This usually allows it to do a much better job, even if putting the cold path out of line is still advisable. (Somehow I’ve never thought to check if it would be helpful to also mark those functions preserve_none on Clang. Seems likely that it would be.)
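
        To make that concrete, here is a rough sketch of the shape in C (a minimal toy, assuming Clang's musttail attribute; the opcode set, names, and state layout are invented for illustration and are not Bolt's internals). Each handler takes the hot state (pc, stack pointer, accumulator) as arguments and ends by tail-calling the next handler through the dispatch table, which is what lets the compiler allocate registers per handler:

            /* Toy tail-call interpreter, not Bolt's code; needs a compiler with
               __attribute__((musttail)) support, e.g. a recent Clang. */
            #include <stdint.h>
            #include <stdio.h>

            typedef void (*OpFn)(const uint8_t *pc, int64_t *sp, int64_t acc);

            static void op_push(const uint8_t *pc, int64_t *sp, int64_t acc);
            static void op_add(const uint8_t *pc, int64_t *sp, int64_t acc);
            static void op_halt(const uint8_t *pc, int64_t *sp, int64_t acc);

            static const OpFn ops[] = { op_push, op_add, op_halt };

            /* Tail-call into the next handler; musttail guarantees no stack growth. */
            #define DISPATCH(pc, sp, acc) \
                __attribute__((musttail)) return ops[*(pc)]((pc), (sp), (acc))

            static void op_push(const uint8_t *pc, int64_t *sp, int64_t acc) {
                *sp++ = acc;            /* spill accumulator onto the stack */
                acc = (int64_t)pc[1];   /* load the immediate operand */
                pc += 2;
                DISPATCH(pc, sp, acc);
            }

            static void op_add(const uint8_t *pc, int64_t *sp, int64_t acc) {
                acc += *--sp;           /* pop and add into the accumulator */
                pc += 1;
                DISPATCH(pc, sp, acc);
            }

            static void op_halt(const uint8_t *pc, int64_t *sp, int64_t acc) {
                (void)pc; (void)sp;
                printf("result: %lld\n", (long long)acc);
            }

            int main(void) {
                const uint8_t code[] = { 0, 2, 0, 3, 1, 2 };  /* push 2; push 3; add; halt */
                int64_t stack[16];
                ops[code[0]](code, stack, 0);                 /* prints "result: 5" */
                return 0;
            }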

        [1] https://blog.nelhage.com/post/cpython-tail-call/

      • nolist_policy 21 hours ago
        Take a look at the Nostradamus Distributor:

        http://www.emulators.com/docs/nx25_nostradamus.htm

      • UncleEntity 1 day ago
        From my research into the subject, the easiest way to implement it would be a 'musttail' macro which falls back to a trampoline for compilers which don't support it. The problem then becomes having the function call overhead (assuming the compiler can't figure out what's going on and do tail-call optimizations anyway) on the unsupported systems with each and every opcode, which is probably slower than just a Big Old Switch -- which, apparently, modern compilers are pretty good at optimizing.
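
        For what it's worth, the macro approach can look roughly like this (a minimal sketch; the tiny VM and opcodes are invented for illustration and the musttail path assumes Clang). With support, NEXT() is a guaranteed tail call; without it, each handler returns the next handler and the outer trampoline loop calls it, which is exactly the per-opcode call/return overhead mentioned above:

            /* Toy example, not a real VM: NEXT() is a guaranteed tail call on
               Clang and falls back to a trampoline elsewhere. */
            #include <stdint.h>
            #include <stdio.h>

            typedef struct { const uint8_t *pc; int64_t acc; } VM;
            typedef void *Handler;             /* really an OpFn; void* breaks the recursive type */
            typedef Handler (*OpFn)(VM *vm);

            #if defined(__clang__)
              /* Guaranteed tail call: dispatch never grows the C stack. */
              #define NEXT(vm) __attribute__((musttail)) return table[*(vm)->pc](vm)
            #else
              /* Fallback: hand the next handler back to the trampoline below. */
              #define NEXT(vm) return (Handler)table[*(vm)->pc]
            #endif

            static Handler op_inc(VM *vm);
            static Handler op_halt(VM *vm);
            static const OpFn table[] = { op_inc, op_halt };

            static Handler op_inc(VM *vm)  { vm->acc += 1; vm->pc += 1; NEXT(vm); }
            static Handler op_halt(VM *vm) { (void)vm; return NULL; }

            static void run(VM *vm) {
                /* Trampoline; with musttail a single fn(vm) call runs the whole
                   program via chained tail calls before returning NULL. */
                for (OpFn fn = table[*vm->pc]; fn != NULL; fn = (OpFn)fn(vm)) {}
            }

            int main(void) {
                const uint8_t code[] = { 0, 0, 0, 1 };      /* inc; inc; inc; halt */
                VM vm = { code, 0 };
                run(&vm);
                printf("acc = %lld\n", (long long)vm.acc);  /* prints acc = 3 */
                return 0;
            }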

        The VM I've been poking at is I/O bound so the difference (probably) isn't even measurable over the overhead of reading a file. I went with a pure 'musttail' implementation but didn't do any sort of performance measurements so who knows if it's better or not.

    • debugnik 1 day ago
      You may be interested in Luau, which is the gradually-typed dialect of Lua maintained by Roblox. The game Alan Wake 2 also used it for level scripting.
    • summerwant 1 day ago
      I see Lua -- do you know about terralang?
  • perlgeek 1 day ago
    I like 99% of this, and the thing I don't like is in the very first line of the example:

    > import abs, epsilon from math

    IMHO it's wrong to put the imported symbols first, because the same symbol could come from two different libraries and mean different things. So the library name is pretty important, and putting it last (and burying it after a potentially long list of imported symbols) just feels wrong.

    I get that it has a more natural-language vibe this way, but there's a really good reason that most of the languages I know put the package/module name first:

        import packageName.member; // java
        from package import symbol; # python
        use Module 'symbol'; # perl
        
    With TypeScript being the notable exception:

        import { pi as π } from "./maths.js";
    • jasonjmcghee 1 day ago
      Also autocomplete.

      Though I almost never type out imports manually anymore.

    • bbkane 1 day ago
      I really like the way Elm does it, from "wide" (package) to "narrow" (symbol). I suspect this also helps language server implementation.

      See https://guide.elm-lang.org/webapps/modules (scroll down to "Using Modules") for examples

    • beariish 1 day ago
      Do you think approaching it the way TypeScript does is a reasonable compromise for Bolt here? Bolt already supports full-module renames like

          import math as not_math
      
      So supporting something along the lines of

          import abs as absolute, sqrt as square_root from math
      
      would be fairly simple to accomplish.
      • WorldMaker 10 hours ago
        The OP seems to be asking for the Python order of the import statement because it allows for simpler auto-completion when typing it:

            from math import square_root as sqrt, abs as absolute
            from math import * as not_math
        
        In a format like this, your language service can open up `math` immediately after the `from math` and start auto-completing the various types inside math on the other side of the `import`.

        Whereas the `import abs from math` order often means you type `import`, have no auto-complete for what comes next, maybe type ` from math`, then cursor back to after the import to get auto-completion hints.

        It's very similar to the arguments about how the SQL syntax is backwards for good auto-complete and a lot of people prefer things like PRQL or C# LINQ that take an approach like `from someTable where color = 'Red' select name` (rather than `select name from someTable where color = 'Red'`).

      • pepa65 1 day ago
        Or: `import math with abs as absolute, sqrt as square_root`
        • derdi 19 hours ago
          Oooh, bikeshedding! To me your `import math with x as y` reads like "import all of math, making all of its symbols visible, just renaming some of them". That's different from the intended "from math, import only x (maybe with a renaming)".
      • cess11 18 hours ago
        Why?

        Put the category first; it makes it easy to skim and sort dependencies. You're never going to organise your dependencies based on what the individual functions, types or sub-packages are called, and sorting based on something that ends up in a more or less random place at the end of a line just seems obtuse.

    • vhodges 1 day ago
      According to the Programming Guide, it supports aliases for imports

      "In case of conflict or convenience, you can give modules an alias as well."

    • Tokumei-no-hito 1 day ago
      can't the compiler process it in reverse?
      • masklinn 13 hours ago
        The compiler doesn’t care either way, this is for the human reader’s benefit.
  • MobiusHorizons 1 day ago
    FYI "the embedded scene" is likely to be interpreted as "embedded systems" rather than "embedded interpreters" even by people who know about embedded interpreters, especially since all the languages you give as an example have been attempted for use on those targets (micropython, lua, and even typescript)
    • beariish 1 day ago
      That's a good point, thank you. I've made a small edit to clarify.
    • RossBencina 1 day ago
      True. I misread it as being for embedded, especially with the term "real-time" in the mix. Then later when there was no ARM or RISC-V support I became very confused.
  • conaclos 1 day ago
    I was quite excited by the description and then I noted that Bolt heavily relies on double floating point numbers. I am quite disappointed because this doesn't allow me to use Bolt in my context: embedded systems where floating point numbers are rarely supported... So I realized that I misinterpreted `embedded`.
    • devmor 1 day ago
      Same here! It's very cool but my ideal use case would be on a limited ISA architecture like ESP32.
      • nativeit 16 hours ago
        Bolt doesn’t support ARM or RISC-V. There are some comments above re: the confusion with the terms “embedded” and “real time”.
        • devmor 16 hours ago
          I think you might have misread our comments, that is exactly what we are lamenting.
  • megapoliss 23 hours ago
    Ran some examples, and it looks like this "High-performance, real-time optimized, super-fast" language is

      ~ 10 times slower than luajit
      ~ 3 times slower than lua 5.4
    • jeroenhd 21 hours ago
      Not bad for version 0.1.0. Lua(JIT) is no slowpoke and has had decades of performance improvements.
    • johnisgood 18 hours ago
      With this and https://github.com/Beariish/bolt/blob/main/doc/Bolt%20Perfor..., it is indeed confusing without testing it out myself.

      That said, somehow I do not believe it is faster than LuaJIT. We will see.

      • megapoliss 17 hours ago
        I used the brainfuck interpreter https://github.com/Beariish/bolt/blob/main/examples/bf.bolt vs the lua/luajit implementations from https://github.com/kostya/benchmarks

        Just checked with nbody:

          - still 10 times slower than luajit
          - 2 times slower than luajit -joff
          - but 20% faster than lua 5.4
          - but uses 47 MB RAM vs 2.5 MB for lua/luajit
        • beariish 17 hours ago
          I appreciate the followup here. The brainfuck interpreter notably isn't meant to be a benchmark; it's a naive implementation for the sake of the example.

          I did spot some poor code in the Bolt version of nbody that can be changed (the usage of `.each()` in the hot loop creates loads of temporary iterators; that's the memory difference).

          luajit -joff does perform better even with this change, but I observe closer to a 15% difference than a 2x one.

          • megapoliss 14 hours ago
            For nbody 500000 on my i5-9300H CPU @ 2.40GHz:

              - 487.41 millis / 2364 kb ram for luajit -joff
              - 770.17 millis / 41712 kb ram for bolt
            
              770.17 / 487.41 ~~ 1.58 cpu, not 2x, but not 15% either
              41712 / 2364 ~~ 17.64 ram
    • amai 18 hours ago
      Where do you get these numbers from? The benchmarks at https://github.com/Beariish/bolt/blob/main/doc/Bolt%20Perfor... don’t seem to support them.
    • ModernMech 15 hours ago
      They at least clarified it by saying "outperforming other languages in its class". It's a slow class so the bar is low.
  • banginghead 1 day ago
    This looks so familiar that it got me thinking: who is collating all of the languages that are being invented? I must see two dozen a year on HN. I'm not dissing OP, but I've seen so many languages I'm not sure if I'm having deja vu, or vuja de.
  • JonChesterfield 1 day ago
    Outperforming languages in its class is doing some heavy lifting here. Missing comparisons to a wasm interpreter, any of the Java or .NET interpreters, the MLs, any Lisps, etc.

    Compiling to register bytecode is legitimate as a strategy, but it's not the fast one, as the author knows, so the language probably shouldn't be branded as fast at this point.

    It might be a fast language. Hard to tell from a superficial look, depends on how the type system, alias analysis and concurrency models interact. It's not a fast implementation at this point.

    > This means functions do not need to dynamically capture their imports, avoiding closure invocations, and are able to linearly address them in the import array instead of making some kind of environment lookup.

    That is suspect; it could be burning function identifiers into the bytecode directly instead of emitting lookups in a table.
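
    To make the contrast explicit, the two encodings look roughly like this (illustrative C only, not Bolt's actual bytecode layout or import machinery):

        /* Illustration only -- not Bolt's real data structures. */
        #include <stdint.h>

        typedef double (*NativeFn)(double);

        /* Linear addressing: the instruction stores an index resolved at compile
           time, and execution does one load from the module's import array. */
        typedef struct { uint8_t op; uint32_t import_idx; } CallByIndex;

        static double exec_by_index(const CallByIndex *i, const NativeFn *imports, double arg) {
            return imports[i->import_idx](arg);
        }

        /* "Burning it in": the resolved function pointer is embedded directly in
           the instruction, so even the array load disappears. */
        typedef struct { uint8_t op; NativeFn target; } CallDirect;

        static double exec_direct(const CallDirect *i, double arg) {
            return i->target(arg);
        }

        static double square(double x) { return x * x; }

        int main(void) {
            NativeFn imports[] = { square };
            CallByIndex a = { 0, 0 };
            CallDirect  b = { 0, square };
            return (int)(exec_by_index(&a, imports, 3.0) + exec_direct(&b, 4.0));  /* 9 + 16 */
        }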

    Likewise, the switch on the end of each instruction is probably the wrong thing; take a look at a function per op, with forced tail calls and the interpreter state in the argument registers of the machine function call. There are some good notes on that from some wasm interpreters, and some context on why from luajit if you go looking.

    • phire 1 day ago
      The class is "embeddable interpreted scripting language", which is not quite the same thing as just an interpreter.

      Embedded interpreters are those designed to be embedded into a C/C++ program (often a game) as a scripting language. They typically have as few dependencies as possible, try to be lightweight, and focus on making it really easy to interop between contexts.

      The comparison hits many of the major languages for this use case, though it probably should have included Mono's interpreter mode, even if nobody really uses it since Mono got AoT.

  • eqvinox 18 hours ago
    Why repeat JavaScript's mistake of having all numbers be floats, with no integer type? I thought that one's well known by now :(
  • cookiengineer 1 day ago
    If functions don't have a return signature, does that mean everything must be satisfied in the compilation step?

    What about memory management/ownership? This would imply that everything must be copied by value at each function call site, right? How to use references/pointers? Are they supported?

    I like the matchers, which look similar to Rust, but I dislike the error handling because it is neither implicit nor explicit, and therefore will be painful to debug in larger codebases, I'd imagine.

    Do you know about Koka? I don't like its syntax choices much, but I think that an effect-based error type system might integrate nicely with your design choices, especially with matchers as consumers.

    [1] https://koka-lang.github.io/koka/doc/index.html

    • driggs 1 day ago
      > If functions don't have a return signature, does that mean everything must be satisfied in the compilation step?

      Functions do have a return signature.

      It looks like the author chose to show off the feature of return type inference in the short example README code, rather than the explicit case.

      https://github.com/Beariish/bolt/blob/main/doc/Bolt%20Progra...

    • zygentoma 1 day ago
      Oh, not OP, but I love Koka. I should play around with it again -- thanks for reminding me!
  • themonsu 1 day ago
    Looks cool, but please can we stop naming things “bolt”
    • IAmLiterallyAB 1 day ago
      Yeah this is the third programming language named Bolt that I'm aware of
      • ModernMech 15 hours ago
        I'm aware of 4 now. Here are the other 3:

        Bolt: a language with in-built data-race freedom! (recent discussion: https://news.ycombinator.com/item?id=23122973) - https://github.com/mukul-rathi/bolt

        Bolt: A programming language for rapid application development - https://github.com/boltlang/Bolt

        BOLT: a programming language that was designed for beginning programmers who have never seen code before in their life - https://sourceforge.net/projects/boltprogramming/files

        I'm very much in favor of authors choosing unique names for programming languages because there's still plenty of good names up for grabs without having to step on someone's toes. If the project is dead, that's one thing; the data-race one was a research project and hasn't had any activity in 5 years. BOLT last modified in 2014.

        But beariish/bolt and boltlang/bolt were started in the same year and are still under active development. With boltlang/bolt obviously snagging the namespace first, I think they should have claim to the name for now. That said, neither seems to have registered any domains, so whoever gets bolt-lang.org/com/net first will probably have an easier time defending a claim.

      • CyberDildonics 1 day ago
        Not to mention Facebook's post-compilation binary optimizer, also called BOLT.
  • freeopinion 1 day ago
    I see your benchmarks compare against other interpreted languages "in its class".

    We read here a couple days ago about Q which is compiled. Bolt claims to "plow through code at over 500kloc/thread/second". Q claims to compile in milliseconds--so fast that you can treat it like a script.

    Bolt and Q are both newborns. Perhaps you could include each other in your benchmarks to give each other a little publicity.

  • slmjkdbtl 1 day ago
    Wow a fast scripting language with normal syntax and type checking.. Really hope this can take off and be a valid Lua alternative.
  • wk_end 1 day ago
    I might be missing this, but I'm not seeing anything about how the type system handles (or doesn't) polymorphism - generics, traits, that sort of thing. Is that in there?
  • npn 18 hours ago
    I don't understand why people still choose the syntax `import xxx from yyy` in the current year. It is a major source of complaints for languages like Python or JavaScript, because it keeps autocomplete from working well.

    Made me instantly lose interest in the language.

    • nativeit 16 hours ago
      Two potential suggestions:

      1. Ask - the author is very much available, right here in the comment section they made specifically for this kind of feedback.

      2. Contribute - code the change you wish to see in the world. Follow the OP’s example, and do something about it.

    • naasking 14 hours ago
      > It is a major source of complaining for languages like python or javascript

      Dynamically typed languages have more difficulties with autocomplete in general; Bolt is statically typed, so you shouldn't automatically assume the same difficulties carry over.

  • tekkk 20 hours ago
    Really impressive, great job! I was interested to see how you had solved the Result type, and it seems quite developer-friendly—no wrappers, just a value & error union. I should try it out to see what it's like to write, if I can run it on ARM64. I wish Godot Script looked like this.
  • sureglymop 17 hours ago
    Super cool! Just now I am building something where I am trying to use mlua to make it scriptable with Lua. But the biggest pain point right now is trying to generate type annotations for LuaLS based on my Rust structs. I will look into whether Bolt could be interesting for the project.
  • memophysic 1 day ago
    Looks great, I'm especially a fan of the more C- and Python-like syntax. Lua works, but its syntax has always bugged me.

    On the feature side, is there any support for breakpoints or a debugging server, and if not is it planned?

    • beariish 1 day ago
      There's nothing like that right now, but it's absolutely something I want to explore in the future.
  • Fraterkes 1 day ago
    Really cool! Roughly how much memory does it take to include it in an engine? Also I'm really interested in the process of creating these really fast scripting languages, have you written anything about how you wrote Bolt?
    • beariish 1 day ago
      Bolt's memory usage in most cases hovers right around Lua 5.4/Luau in my own testing, but maybe I should include a few memory benchmarks to highlight that more. It does notably have a higher memory overhead during compilation than other languages in this class though.

      As for writeups, I'm working on putting out some material about the creation of Bolt and my learnings now that it's out there.

  • tayistay 1 day ago
    Congrats! I think this could be quite useful for me.

    I noticed that `let`-declared variables seem to be mutable. I'd strongly recommend against that. Add a `var` keyword.

  • Warwolt 20 hours ago
    Looks nice! Are there any plans for a language server and formatting tooling?

    Usually I feel like that's the bare minimum before I'd want to try and play around with a language.

  • astatine 1 day ago
    This looks awesome. Would you have any data on the performance of a large number of invocations of small scripts? I am wondering about the startup overhead for every script run, which the 500kloc/s may not capture well.
    • beariish 1 day ago
      It depends on your exact use case; I'm not 100% sure what you're asking. There is some overhead for invoking the compiler on a per-script basis. If you're parsing once but running a script many times, Bolt provides some tools (like reusing a preallocated thread object) to amortize that cost.
      • astatine 1 day ago
        We have a server which uses Lua-based script plugins. They are usually a few hundred to a few thousand lines and get invoked via APIs. I was trying to figure out how Bolt would behave in such a context and whether we could replace the Lua-based plugin engine with this.
  • je42 1 day ago
    Nice language! I am wondering how error handling / exceptions work in Bolt.

    Quickly scanned the programming guide but wasn't able to find it. Did I miss a section?

  • merksoftworks 1 day ago
    This is really cool, but it's not portable to macOS or aarch64 yet, and that kind of portability unfortunately is what appeals to me about an embeddable scripting language.
  • truekonrads 13 hours ago
    I took away from the benchmarks how fast Luau is...
  • kiririn 1 day ago
    Nice, gives me Pawn vibes

    (https://www.compuphase.com/pawn/pawn.htm)

  • capyba 1 day ago
    In terms of performance, how does it compare to compiled languages like the C it’s written in?
  • eulgro 1 day ago
    The question I ask myself when I see this kind of project is: how long are you willing to maintain it for?

    My main concern about a new language is not performance, syntax, or features, but long term support and community.

    • brabel 1 day ago
      The only way to have any idea of how long a language might still be around is to look at how long it's already been around. From this perspective, you can only use older languages. The benchmarks show that Lua (and the Luau and Lua+JIT variants) is actually very competitive, so I'd stick with one of those.
    • 01HNNWZ0MV43FF 1 day ago
      In the end, weight is a kind of strength, and popularity is a kind of quality. It looks promising, but you can't expect long-term support until there are more contributors and users.

      At this point it is too early to know. Even JavaScript took like 20 years to catch on.

  • grodriguez100 1 day ago
    Sounds very good, and I can see many use cases in embedded systems. But that probably requires 32-bit ARM support. Is that planned?
    • beariish 1 day ago
      As of right now, no - my primary target when developing this was real-time and games in particular, since that's what I know best, but if there's a real target in embedded, that's certainly something that could be explored.
      • grodriguez100 19 hours ago
        Looks like I misinterpreted both embedded and real-time then.
  • Forgret 1 day ago
    It looks cool, and I wish you luck in developing the language. I like it and I hope it becomes popular someday.
  • Vandash 1 day ago
    Game dev for 15+ years here; love the first example on GitHub. This is compiled, right? So it cannot replace Lua?
    • beariish 1 day ago
      Bolt is not compiled ahead of time; it's bytecode-interpreted, just like Lua.
    • IshKebab 1 day ago
      It's compiled in the same way that Lua is compiled. So yes, it can replace Lua.
  • acron0 1 day ago
    If I was still writing games I would be alllllll over this
  • thrance 1 day ago
    Function return type inference is funny but I don't think it's that great of a feature. It makes it harder for a library's consumer to know how to properly use a function, and it also makes it harder for the maintainer to not break backwards compatibility inadvertently. Anyway, I'm all for experimenting.
    • beariish 1 day ago
      There's nothing stopping a library author from explicitly annotating return types wherever a stable interface is important, the idea is more for smaller functions or callbacks to make use of this. Perhaps I'll make the examples clearer to reflect the intention.
      • flakes 1 day ago
        Perhaps it makes more sense to require that exported function interfaces be explicit. That forces you to document the API and more carefully consider changes.
    • jitl 1 day ago
      It’s great in TypeScript. There, your source can have inferred return types, and then a build step produces resolved typings files (.d.ts) for distribution that have the fully specified types.
      • thrance 1 day ago
        Ah, true, now that you mention it. I feel like it's the best of both worlds, you get type inference and it gets documented automatically.
  • k__ 1 day ago
    Awesome.

    Is it deterministic like Lua?

  • boffinAudio 16 hours ago
    Very cute language ..

        match x {
            is table {
                print(indent_this + "{")
                for pair in pairs(x) {
                    write(indent + "   " + to_string(pair.key) + ": ") 
                    pretty_print_internal(pair.value, indent + "   ", false)
                }
                print(indent + "}")
            }
    
    .. okay, you got my attention (long-time Lua fan) .. will have to give this some workbench time and learn it ..