A dApp Developer’s Perspective on Rue v. Chialisp
Can one catch up to the other?
One of Chia’s biggest advantages over Bitcoin is its programmability: each coin’s logic is expressed as a set of ChiaLisp Virtual Machine (CLVM) instructions, allowing Chia coins to operate in ways that turn them into vaults, tokens (CATs/NFTs), and even dApps (e.g., TibetSwap). That said, CLVM is more like assembly code - designed for machines first and humans second - which is why higher-level programming languages such as Chialisp are used to code puzzles.
Last year, I was extremely excited to hear about the launch of Rue, a new programming language for Chia puzzles. Like Chialisp, Rue has a compiler that transforms ‘human-friendly’ code into CLVM bytecode, which is what gets executed when coins are spent on Chia. If you’re interested in more of the inner workings of the compiler, Rigidity’s blog has some great articles on the topic.
I am always looking to improve the quality (and timeline) of my Chia projects, so I decided to give Rue a try for the three main initiatives that I was working on: the Reward Distributor, XCHandles, and CATalog. All of them were already pretty far along, which allowed me to compare the existing Chialisp code directly to its Rue equivalent. Below, I (quite subjectively) compare the two languages in terms of the features I consider to be the most important: efficiency, developer experience, and security. While I tried to note the biggest nuances and caveats, it’s best to take this with a grain of salt and form your own opinion - let me know your take in the comments!
Efficiency
It’s usually difficult to quantify the efficiency of programs in a way that captures all dimensions, but, thankfully, the Chia blockchain provides a ‘standard’ way through execution cost. Put simply, cost quantifies the computational resources - storage, execution, and changes to the blockchain - required to spend some coins. It is closely tied to a transaction’s size, and plays a direct role in determining which transactions are prioritized (farmers essentially try to maximize fee-per-cost when building blocks). You can learn more about cost here and here.
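To make the fee-per-cost idea concrete, here is a hypothetical Python sketch of greedy block building. The transaction data and the block cost limit are made up for illustration - they are not Chia’s real parameters:

```python
# Hypothetical sketch: a farmer filling a block tries to maximize total
# fees collected, which a greedy strategy approximates by picking
# transactions with the highest fee-per-cost ratio first.

def build_block(mempool, max_block_cost):
    """Greedily select transactions by descending fee-per-cost."""
    ordered = sorted(mempool, key=lambda tx: tx["fee"] / tx["cost"], reverse=True)
    block, used = [], 0
    for tx in ordered:
        if used + tx["cost"] <= max_block_cost:
            block.append(tx["name"])
            used += tx["cost"]
    return block

# Illustrative mempool - fees and costs are arbitrary example numbers.
mempool = [
    {"name": "a", "fee": 50, "cost": 10_000_000},  # 5.0e-6 fee per cost
    {"name": "b", "fee": 30, "cost": 2_000_000},   # 1.5e-5 fee per cost
    {"name": "c", "fee": 10, "cost": 8_000_000},   # 1.25e-6 fee per cost
]
print(build_block(mempool, max_block_cost=12_000_000))  # → ['b', 'a']
```

Note how ‘c’ is left out even though it would pay a fee: under fee pressure, lower cost directly translates into better inclusion odds.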
As I had been working on the Reward Distributor/XCHandles/CATalog for a while, all three projects already had tests covering all of their features. In fact, one of the classes I added to the Chia wallet SDK is the benchmarker, which allows developers to easily label spends sent to the simulator during tests and later see statistics about spends sharing a label - specifically, their number and average/median/maximum/minimum costs. I added the Benchmark class specifically for action layer dApps, as I was curious about the throughput they support - the number of interactions that fit in a block is inversely related to the cost of those interactions.
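Conceptually, the benchmarker does something like the following Python sketch. The real class lives in the wallet SDK, and the label and cost values here are made up - this only illustrates the group-by-label statistics idea:

```python
# Hypothetical sketch of a benchmarker: record spend costs under a label,
# then report count and average/median/min/max for that label.
from collections import defaultdict
from statistics import mean, median

class Benchmark:
    def __init__(self):
        self.costs = defaultdict(list)

    def record(self, label, cost):
        """Attach a spend's cost to a label (e.g., one per tested action)."""
        self.costs[label].append(cost)

    def stats(self, label):
        c = self.costs[label]
        return {
            "count": len(c),
            "avg": mean(c),
            "median": median(c),
            "min": min(c),
            "max": max(c),
        }

bench = Benchmark()
# Example costs for a made-up action label - not real benchmark data.
for cost in (11_000_000, 12_500_000, 12_000_000):
    bench.record("example_action", cost)
print(bench.stats("example_action"))
```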
Here, I think the numbers speak for themselves. For each tested action (except setup, which was used as a control), bytecode generated by the Rue compiler had a lower cost than the Chialisp-generated equivalent. To quantify: all values were in the 93.13%-98.21% range, with an average relative cost of 95.91% and a median of 95.92%. Put another way, using Rue saved, on average, 4.09% of the final bundle’s size, and in all examples Rue generated less costly (‘more efficient’) programs than Chialisp. The full data can be found here.
While 4.09% may seem like a low number, keep in mind that this cost difference is for fundamentally equivalent puzzles. Users of these dApps would pay, on average, 4.09% less in fees under fee pressure conditions. To me, this is an impressive improvement, as the Chialisp puzzles used for comparison have gone through multiple rounds of (sometimes esoteric) optimizations. What’s more, the testing bundles were made to resemble real user interactions, which means they contain adjacent coin operations such as NFT transfers and offers - this makes the relative cost decreases look smaller than they actually are. If we are to believe the official cost estimates table, each spend bundle had, on average, almost the equivalent of one standard transaction removed from it in terms of cost.
So, where does the cost difference come from? The Rue and Chialisp puzzles I compared are functionally equivalent, so they have the same condition cost, as they output the same conditions. Thus, the difference comes from a mix of storage and execution cost. While future compression initiatives will minimize the cost of storing puzzles on-chain, I can confidently say that, right now, a puzzle’s size makes up a majority of its cost, and I predict that will remain the case in the short and medium terms as well. A nice property of size is that you can record it directly from puzzles without any noise (coming, for example, from other puzzles running in the same spend bundle).
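Because size dominates, byte savings translate into cost savings almost directly. A small sketch, using Chia’s per-byte cost constant of 12,000 (the puzzle sizes fed in are just example numbers):

```python
# Chia charges cost for every byte included in a spend, so a puzzle's
# serialized size maps straight to storage cost.
COST_PER_BYTE = 12000  # Chia's current per-byte consensus cost

def storage_cost(puzzle_bytes: int) -> int:
    """Storage cost of a puzzle of the given serialized size."""
    return puzzle_bytes * COST_PER_BYTE

# Example: shaving 117 bytes off a puzzle saves this much cost per spend.
print(storage_cost(117))  # → 1404000
```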

The Rue compiler removed 10 to 231 bytes from each of the 39 puzzles used in this small experiment. As a percentage, the range is 6.08% to 29.59%. The average number of bytes saved per puzzle is 110.72 (14.84%), and the median is 117 bytes (13.58%). You can find the raw data on the ‘Bytes’ tab of the spreadsheet linked above.
The regression line in the graph above suggests that the savings from switching to Rue can be approximated quite well using the equation of a line. There is an absolute saving, which I believe comes from Rue’s implementation of a frequently used Chialisp function, curry_tree_hash. Rue’s standard library uses an implementation that is significantly smaller but has a higher runtime cost, while the version used in Chialisp prioritizes runtime at the expense of a much larger storage footprint. An absolute saving of 32.2 bytes (387,600 cost) is indeed very close to the differences observed in the Chialisp vs. Rue curry benchmarks.
The slope of the regression line suggests that the Rue compiler also has a proportional advantage: each byte produced by Chialisp ‘corresponds’ to 0.9 bytes produced by Rue. This number likely comes from a few of the compiler optimizations Rue has that Chialisp hasn’t implemented, such as:
Type checking (‘is’) simplification: If the type of a checked variable is already known, the check will be evaluated and replaced with the resulting boolean at compile time
Symbol grouping: If multiple bindings (‘let’) are declared in the same code branch, the compiler tries to group definitions together to minimize the number of apply operators used in the resulting bytecode
Symbol inlining: If a variable or constant is used a single time, its value will be inlined. Similarly, if a function is called a single time and its arguments are used a single time as well, it will be inlined automatically.
Superpaths: The compiler detects when two or more concatenated references could be replaced by a reference to an intermediary tree node (e.g., (c 2 3) -> 5) (hey, I suggested this one!)
Binary tree arguments: The compiler structures the arguments of some functions as trees instead of lists, which makes referencing them take fewer bytes in some cases (hey, I suggested this one as well!)
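Putting the intercept and slope together, the regression described earlier can be sketched as a simple linear model. The coefficients below are approximations read off the benchmark data, not exact compiler behavior, and the 700-byte input is an arbitrary example:

```python
# Linear model of the observed Rue vs. Chialisp output sizes:
# rue_bytes ≈ SLOPE * chialisp_bytes - INTERCEPT
SLOPE = 0.9       # ~0.9 bytes of Rue output per byte of Chialisp output
INTERCEPT = 32.2  # constant saving (bytes), roughly the smaller curry_tree_hash

def predicted_rue_bytes(chialisp_bytes: float) -> float:
    """Predicted Rue output size for a Chialisp puzzle of a given size."""
    return SLOPE * chialisp_bytes - INTERCEPT

# For a hypothetical 700-byte Chialisp puzzle, the model predicts roughly:
print(round(predicted_rue_bytes(700)))  # → 598
```

In other words, larger puzzles benefit proportionally more from the compiler optimizations, on top of the fixed curry_tree_hash saving.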
Verdict: Rue > Chialisp, for now. There is no magical reason why the Chialisp compiler can’t implement Rue’s optimizations, but so far it hasn’t.
Note: All the puzzles in this benchmark use the ‘standard’ version of Chialisp for efficiency purposes. Puzzles in modern Chialisp, while easier to write due to QoL improvements such as ‘let,’ tend to take up a lot more bytes. There are plans to adapt the normal optimizer for modern Chialisp as well once it is ‘properly’ released; for this comparison, I used the version that generates the most efficient puzzles.
Developer Experience

I believe one of Rue’s biggest advantages is readability - the ability of another developer to scan the code and easily understand what it does. Not only does this make collaboration and audits faster, but it also accelerates the development process itself. In particular, I find Chialisp’s parenthesis structure to add some cognitive load during development, which Rue eliminates through the use of Rust-like syntax. Another thing I had to keep in mind during the experiment is that ‘standard’ Chialisp does not have let, but that is fixed in modern Chialisp, so I won’t count it as a factor.
Speaking of cognitive load, formatting code is also something that could be automated. This is not available for either language yet, but it brings us to another category: tooling. Both languages have syntax highlighting, but I found the Chialisp extension to be extremely unreliable while modifying code live (to the point where I just disable it - that’s better than seeing some portions highlighted while others aren’t). Rue, on the other hand, has an extension that works great and is mostly stable (it works consistently, with only rare, non-reproducible edge cases). Another nice feature of Rue is that you can write test functions in the same file as your Rue code - this is similar to what Foundry does, and I expect it will become a very popular way of testing functions. Still, because the three apps I used were initially written in Chialisp (which does not have such features out of the box), the benchmark code was written against the wallet SDK, which has greatly improved the developer experience on Chia but is not part of the Rue language itself. One integration worth mentioning is compile_rue!, which allows developers to test the latest version of their code without having to compile it and then copy the bytecode and hash during development (it’s similar to fast refresh in Next.js - change the code, and the next time you run your test, the new bytecode/hash will be used).
Lastly, this post wouldn’t be complete if this section didn’t talk a bit about AI. In general, I am opposed to using AI as the primary way of writing on-chain code, which is responsible for managing user funds - if we want this industry to be taken seriously, we should treat critical code much more carefully than we treat the UI code controlling how round an edge is. That said, I am all for using AI to complement reviews/audits/bug bounties/contests/etc. Generally, AI models were able to generate Rue completions very well, with the worst suggestions appearing in places where they needed to rely heavily on Chia blockchain specifics (e.g., output conditions such as SEND_MESSAGE). For Chialisp, my usual workflow is to disable suggestions entirely - even with many puzzles in the codebase giving the models some of the syntax, suggestions are still terrible, and models often try to use operators from other Lisps. Ultimately, I think you can fine-tune models to be better at either language, but their current comprehension and ability to suggest good completions indicate that Rue may be much more ‘intuitive’ for programmers transitioning to Chia on-chain programming. That is, model performance on completions is a good proxy for how beginner-friendly a programming language is.
Verdict: Rue >> Chialisp. Rue’s readability and beginner-friendliness cannot be easily replicated by Chialisp, whereas its nice chia-wallet-sdk integration and testing suite could be.
Security
When it comes to on-chain programs, security is, in my opinion, the most important consideration. An easy way to compare languages is to ask, “How widely is each used in production?” Here, Chialisp takes the lead, but I predict Rue will quickly catch up due to it beating Chialisp in the other two categories (and stl agrees, see #6 here).
Another aspect is how a language’s syntax shapes the way developers think about and consider attack scenarios. An example from other ecosystems is Solidity, where re-entrancy vulnerabilities are easy to introduce if you’re not aware of them. I think the architecture of the Chia blockchain inherently makes developers consider side effects the same way in both languages, as they are all determined by the output condition list (e.g., create a coin).
A more nuanced question to ask is, “How likely is it that the code I’m reading is doing what I think it’s doing?” Here, I think Chialisp has an advantage, as it is a lot closer to CLVM than Rue, especially when it comes to the type system. For example, Rue’s type checks (e.g., assert var is Bytes32) rely on the compiler’s inferred types, which are not always enforced strictly in the final bytecode (for the arguments of the main function, for example). While I understand why Rue avoids the storage/runtime overhead of these checks, not understanding how the compiler handles types may create issues. That said, I believe Rue’s type system prevents a lot of errors, some of which may result in security vulnerabilities - while not ideal, I am convinced this is better than Chialisp’s lack of a type system.
Lastly, I’d like to point out that both compilers are well-tested in some way or another, and, while there’s always the chance the compiler introduces vulnerabilities by mis-translating code to bytecode, I think the chances for both Rue and Chialisp are minimal given the teams’ attention to security and the precautions in place.
Verdict: Chialisp > Rue, for now. I believe the gap between Rue and Chialisp will mostly disappear as more puzzles written in Rue reach production deployment.
Conclusion
There are two types of differences between Rue and Chialisp. Some of them are addressable - although Rue has the upper hand despite being more recent, Chialisp’s tooling and optimizer can be improved. Likewise, Rue can (and likely will) get more ‘real-world’ testing as the protocols using it go live and grow in TVL.
The remaining differences go deeper. In my mind, there is one root cause for them: Rue is ‘further away’ from CLVM than Chialisp is. This makes it beginner friendly and generally much easier to read, but introduces the risk that developers might misinterpret the behavior of the compiler in some places, as Rue’s compiler has to make more complex decisions.
Personally, I think Rue’s advantages currently make it a better choice than Chialisp, and I predict this will be the case in the short/medium-term as well. I’m confident that Rue is well-tested, and my code was ultimately reviewed by the creator to ensure no compiler ‘edge cases’ were triggered by the puzzles. Still, it remains as important as ever to follow the latest developments in Chia tooling and programming languages, which may tilt the balance quite rapidly.
Special thanks to Rigidity and stl for making suggestions on a draft of this article.

