@Matt_StableLab Thanks for taking the initiative once again.
TL;DR
In this post, I explain why Euler shouldn’t prioritize tokenomics. For v2, I suggest allocating incentives towards (1) the insurance fund, (2) grants and governance, and (3) ad-hoc liquidity at re-launch.
Abstract
I’d like to share my skepticism about focusing on tokenomics, especially at the current stage Euler is undergoing. While I generally agree that a token is a great tool to align incentives, grow the ecosystem, and decentralize a protocol, an explicit focus on a tangled, complex token model (1) distracts from the main product and (2) leads to inefficient capital allocation (e.g., revenue sharing, buyback-and-burn) instead of redirecting those resources to activities with a higher ROI for project growth.
One may say that making the token “attractive” (essentially pumping the price) can build a community, i.e. gud price → gud project → more users; however, when the market goes the other way (and it eventually does), it’s a dead end. I don’t really believe that passive token holders, who are mostly mercenaries and free riders, bring any sustainable value to a project. Why think about the interests of an obscure token holder when those resources could go towards building a new product, providing a grant, hiring a dev, etc.?
Thankfully, we can look back at the past 3y of experience and conclude that most of these fancy tokenomics are a byproduct of DeFi 2.0, where a lack of fundamentals is compensated for by Ponzi-like financial games that grab attention and enrich early participants but yield nothing long-term.
General principles
Nevertheless, a token is a powerful tool if designed and applied properly. When redesigning a token, I would suggest following these principles:
- Postpone the token launch as much as possible. The best approach is to find PMF and then launch a token to scale. This isn’t really relevant to Euler because there is a token already, but considering the re-launch and a potential tokenomics reset, I’d suggest not overcomplicating the token model;
- Define the key protocol metrics/goals that the token should drive, especially if it’s liquidity targeting.
For Euler, I’d say these are:
- Incentives towards risk minimization, e.g. an insurance fund. Growing and promoting TVL without any recourse or backstop is fatal. Elaborated on this here;
- Grants and governance. With grants, it’s pretty obvious that allocating resources to ecosystem growth has a very good ROI. This is hardly tokenomics per se, just sound treasury management. As for (on-chain) governance, a lot of people faded this narrative after a number of disappointments (Aragon and the RFV riders, the Fei/Rari drama), but I’d say the impact of community members and delegates at mature projects is significant.
- Ad-hoc incentive allocation at re-launch, new chain deployments, new collateral listings, etc. Don’t make this a long-lasting initiative; leave room to experiment and adapt.
- Not locking oneself into a rigid token model with little or no flexibility to adjust, e.g., veTokenomics with a 4y lockup;
- Not allocating incentives to gov token liquidity on DEXs. I guess no comments here;
- Avoid revenue sharing (more on this below).
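To make the insurance-fund principle concrete, here’s a minimal sketch of a reserve funded by a cut of protocol revenue that backstops bad debt. All names and numbers are hypothetical; this is just the mechanic, not Euler’s actual design:

```python
class InsuranceFund:
    """Hypothetical reserve: diverts a share of protocol revenue
    and covers (part of) a bad-debt shortfall when one occurs."""

    def __init__(self, revenue_cut: float):
        self.revenue_cut = revenue_cut  # e.g. 0.10 = 10% of revenue
        self.balance = 0.0

    def accrue(self, protocol_revenue: float) -> float:
        """Divert a share of revenue into the fund; return the remainder."""
        cut = protocol_revenue * self.revenue_cut
        self.balance += cut
        return protocol_revenue - cut

    def cover(self, shortfall: float) -> float:
        """Pay out as much of a shortfall as the fund holds; return the uncovered part."""
        paid = min(self.balance, shortfall)
        self.balance -= paid
        return shortfall - paid


fund = InsuranceFund(revenue_cut=0.10)
for _ in range(12):          # a year of (hypothetical) monthly revenue
    fund.accrue(100_000.0)   # $100k/month, 10% diverted -> $120k reserve
uncovered = fund.cover(90_000.0)  # a $90k shortfall is fully absorbed
```

The point of the sketch is that the fund’s size, not the token price, determines how much insolvency risk the protocol can actually absorb.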
Model analysis
1. Revenue sharing
This is one of the hottest long-running discussions in the community, especially with the latest Lido proposal and the constant suspense around the Uniswap fee switch. At the beginning there were more proponents of sharing revenue because it seemed like a DeFi-native, revolutionary thing (plus the validation from Curve), but now the general consensus leans towards the web2 model. Again, why allocate capital to passive token holders, where the ROI is hard to estimate, when it’s more efficient to reinvest in project growth?
Revenue sharing might make sense if: (1) the protocol is profitable and (2) there are no real opportunities to re-invest – neither of which holds for DeFi projects. There are a few great recent analyses on this matter: 1, 2.
Yes, there are some contrary examples like Curve and GMX, however, I don’t think these models are really applicable.
2. Buyback-and-burn
The goal of this model is also token appreciation, but via scarcity. So the general feedback is similar to the previous model. And even when buying back, it’s better to allocate those funds to endeavors with a higher ROI.
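The opportunity cost is easy to see with back-of-the-envelope arithmetic. A quick sketch (hypothetical numbers; it also ignores the price impact of the buyback itself):

```python
def burn_supply_reduction(revenue: float, token_price: float, supply: float) -> float:
    """Fraction of total supply removed if all revenue is spent
    buying tokens at the current price and burning them.
    Simplification: assumes the buyback itself doesn't move the price."""
    tokens_burned = revenue / token_price
    return tokens_burned / supply


# Hypothetical: $1M of revenue, a $2 token, 100M total supply
reduction = burn_supply_reduction(1_000_000.0, 2.0, 100_000_000.0)
# -> 500k tokens burned, i.e. 0.5% of supply
```

Spending a full year of revenue to shave half a percent off the supply is hard to justify against funding audits, grants, or a bigger backstop.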
3. VeTokenomics
This can be discussed for hours; however, I agree with @knightsemplar that it’s a very specific model that ultimately worked out ideally only for Curve and maybe a few more projects. There were many attempts to replicate this model for lending protocols and DOVs, but with no success imo.
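For readers unfamiliar with the mechanic being discussed: in Curve-style veTokenomics, voting weight is proportional to the locked amount scaled by remaining lock time, decaying linearly to zero. A simplified sketch (the real VotingEscrow contract tracks a piecewise-linear bias/slope rather than recomputing per query):

```python
MAXTIME = 4 * 365 * 86400  # 4-year maximum lock, in seconds

def ve_weight(locked_amount: float, lock_end: int, now: int) -> float:
    """Curve-style voting weight: amount scaled by remaining lock time,
    decaying linearly to zero at lock expiry."""
    remaining = max(lock_end - now, 0)
    return locked_amount * remaining / MAXTIME


# 1000 tokens locked for the full 4 years -> weight of 1000 at lock time
w_full = ve_weight(1000.0, lock_end=MAXTIME, now=0)
# halfway through the lock, the same position has decayed to 500
w_half = ve_weight(1000.0, lock_end=MAXTIME, now=MAXTIME // 2)
```

The 4y lockup baked into `MAXTIME` is exactly the kind of rigidity flagged in the principles above: once governance power is tied to multi-year locks, the model is very hard to unwind.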
4. Utility
There were good examples provided by @Matt_StableLab of how a token can be a tool for UX enhancement; however, I still don’t think the token deserves that much attention right now – it’s not an 80/20 move.
Summary
As a general approach, I’d suggest not focusing heavily on complicated tokenomics to please token holders. It’s better to be the best protocol with the worst token, like Uniswap, than to have the temporary clout of DeFi 2.0.
But considering that there is already a token that can add leverage at the re-launch, I’d suggest the following ways to apply it:
- Insurance fund, backstop, or any form of recourse to mitigate and/or transfer insolvency risk (e.g., a safety module). Such a commitment might also bring back trust in the protocol more swiftly;
- Allocate funds to grant programs and governance/delegation;
- Ad-hoc liquidity incentives at the re-launch or deployment on new chains.
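On the last point, ad-hoc incentives are typically a fixed, time-boxed reward budget split pro-rata by stake (Synthetix-style StakingRewards accrues this continuously per second; the sketch below settles it once for a static set of stakers, with hypothetical names and amounts):

```python
def pro_rata_rewards(stakes: dict[str, float], reward_budget: float) -> dict[str, float]:
    """Split a fixed incentive budget pro-rata by each participant's stake.
    One-shot simplification of a per-second streaming rewards contract."""
    total = sum(stakes.values())
    if total == 0:
        return {k: 0.0 for k in stakes}
    return {k: reward_budget * s / total for k, s in stakes.items()}


# Hypothetical campaign: 100k tokens over the period, two LPs
rewards = pro_rata_rewards({"alice": 300.0, "bob": 100.0}, 100_000.0)
```

Because the budget is fixed and the campaign bounded, this kind of program can be repeated, resized, or dropped per deployment – which is exactly why it’s preferable to a permanent emissions schedule.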