OP Stack Innovators Dialogue: How Plasma Mode Will Change the Future of Chain Games
Devs on Devs: A Conversation Between tdot and Ben Jones
In this special episode of Devs on Devs, we invited tdot (core protocol developer of Plasma Mode and a developer of Redstone), along with Ben Jones, co-founder of Optimism and a key driver of the OP Stack. Plasma Mode allows developers to build on the OP Stack without publishing data to L1; instead, they can flexibly switch to off-chain data providers, cutting costs and improving scalability. In this conversation, they discussed the origins of the collaboration between Redstone and Optimism, the importance of reviving Plasma, the necessity of bringing experimental protocols into production, the future roadmap of Plasma Mode and the OP Stack, and their excitement about the development of the full-chain gaming sector.
01. How to Improve OP Stack Using Plasma Mode
Ben: What is the process of improving the OP Stack like?
tdot: I joined Lattice about a year ago to work specifically on Plasma Mode. The goal was very clear: we have many MUD applications that consume a lot of gas, and at the same time we are trying to put a large amount of data on-chain, so we needed a solution that supports these needs while remaining cost-effective. The Lattice team had already done some experiments on the OP Stack, such as prototyping on-chain worlds and deploying them there. We found that the OP Stack was already very usable.
So we asked ourselves, "How can we make it cheaper?" The basic assumption is, "We believe the OP Stack is the framework that aligns best with the Ethereum philosophy and is fully compatible with EVM." What runs on the mainnet can also run on the OP Stack, which is the ideal solution. But we want it to be cheaper.
At that time, calldata was still the data availability (DA) layer for OP Stack chains, which was very expensive. So we clearly couldn't launch an L2 using calldata, as our full-chain games and MUD worlds required higher throughput. Therefore, we decided to start exploring alternative data availability (Alt DA) solutions. In fact, exploring Alt DA was already mentioned in the initial OP Stack documentation.
So we asked ourselves, "What would happen if we started from off-chain DA?" We wanted the entire security model to still rely on L1 Ethereum. So rather than adopting other Alt DA networks, we decided to store the data in centralized DA storage and then find an effective security model on L1.
This is why we wanted to reuse some old Plasma concepts and layer them on top of rollups. There are some differences here. The biggest question was: how do we implement off-chain DA and on-chain data challenges on the existing OP Stack? Our goal was to make as few changes as possible to the OP Stack, with no impact on the rollup path, as we did not want to affect the security of other rollup chains using the OP Stack.
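The core idea described above can be sketched in a few lines. This is a minimal illustration, not the actual OP Stack implementation: the function names and the in-memory "storage service" are assumptions for exposition. Batch data lives off-chain, and only a hash commitment is posted to L1; anyone deriving the chain verifies retrieved data against that commitment, and a mismatch or missing response is what would justify opening a DA challenge.

```python
import hashlib

# Hypothetical sketch: names and the in-memory "storage service" are
# illustrative assumptions, not the actual OP Stack implementation.

OFFCHAIN_STORE = {}  # stands in for a centralized off-chain DA service

def submit_batch(batch_data: bytes) -> bytes:
    """Store batch data off-chain; only a 32-byte commitment goes to L1."""
    commitment = hashlib.sha256(batch_data).digest()
    OFFCHAIN_STORE[commitment] = batch_data
    return commitment

def fetch_and_verify(commitment: bytes) -> bytes:
    """A deriving node checks retrieved data against the L1 commitment."""
    data = OFFCHAIN_STORE.get(commitment)
    if data is None or hashlib.sha256(data).digest() != commitment:
        # Data missing or tampered with: this is the situation in which
        # a DA challenge would be opened on L1 to force the data on-chain.
        raise LookupError("data unavailable -- open a DA challenge")
    return data
```

The point of the sketch is the trust model: L1 only ever sees the hash, so the off-chain provider can never substitute different data without detection; it can only withhold it, which the challenge mechanism addresses.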
When designing a rollup, you wouldn't think, "What happens if someone changes the data generation process to store data elsewhere?" Even with these changes, the OP Stack remains very powerful and works well out of the box. This is the first change we made.
After that, we needed to write the contracts that create these challenges: DA challenges that force data on-chain. That was the second step, integrating the contracts into the process. We had to build the integration into the derivation pipeline, so that you can derive data either from an off-chain DA source or from the L1 DA challenge contract, in case the data was submitted on-chain while a challenge was being resolved.
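The two-source derivation described here can be sketched as follows. The class and method names are illustrative assumptions, not the real op-node or challenge-contract interfaces: derivation first tries the off-chain DA source, and if that fails, falls back to data that a resolved challenge forced on-chain (resolution requires posting the exact preimage of the commitment).

```python
import hashlib

# Hypothetical sketch of the derivation fallback; class and method names
# are illustrative, not the real op-node or contract interfaces.

class ChallengeContract:
    """Stand-in for the L1 DA challenge contract."""
    def __init__(self):
        self._resolved = {}  # commitment -> data revealed on-chain

    def resolve(self, commitment: bytes, data: bytes) -> None:
        # Resolving a challenge means posting the exact preimage on-chain.
        assert hashlib.sha256(data).digest() == commitment, "wrong preimage"
        self._resolved[commitment] = data

    def resolved_data(self, commitment: bytes):
        return self._resolved.get(commitment)

def derive_batch(commitment: bytes, offchain_da: dict,
                 contract: ChallengeContract) -> bytes:
    """Prefer the off-chain DA source; fall back to data that a
    challenge forced on-chain during resolution."""
    data = offchain_da.get(commitment)
    if data is not None:
        return data
    data = contract.resolved_data(commitment)
    if data is not None:
        return data
    raise LookupError("data unavailable and no resolved challenge")
```

Because both paths verify against the same commitment, every honest node derives the same batch regardless of which source supplied it, which is what keeps the change out of the rollup security path.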
This is the crux of the matter. It’s complex because we want to keep things elegant and robust. At the same time, it’s a relatively simple concept. We are not trying to reinvent everything or change the entire OP Stack, but rather trying to keep things simple in a complex environment. So overall, this has been a very cool engineering journey.
Ben: I can speak from the OP perspective. You mentioned some of Lattice's early work. Coincidentally, at the same time, we at Optimism did an almost end-to-end rewrite of the entire OP Stack, a release we call Bedrock.
Basically, after building the rollup for two years, we took a step back and reflected, saying: "Well, if we were to maximize all the experiences we've learned, what would that look like?" This evolved into what ultimately became known as the Bedrock codebase, which is the largest upgrade we've made to the network.
At that time, we collaborated with you on a project called OPCraft, and I believe Biomes is its spiritual successor. This was the most enjoyable time we had playing on-chain. At the same time, we also breathed a sigh of relief because others could also use OP Stack for development. I think another important turning point for scalability in the past few years is that many people can run chains.
It's not just those who have developed large and complex codebases who can do this. When we started collaborating, seeing others able to take over this codebase and do some really amazing things was a great affirmation. Then seeing this situation scale to Plasma in practical applications was really cool. I can even talk a bit about that history.
Before Optimism became Optimism, we were actually researching a technology called Plasma. At that time, the task we undertook far exceeded the capacity of the scalability community. The early Plasma designs may not map directly onto today's Plasma.
Today's Plasma is much simpler. We separate the proof and challenge of state validation from the challenge of data. Ultimately, we recognized a few years ago that Rollups are much simpler than Plasma. I believe the community's conclusion at that time was "Plasma is dead." This was a meme from that period in Ethereum's scaling history.
But we always believed that "Plasma is not dead; we can just try a simpler task first." Now we use different terminology. For example, back then there were concepts like exits, and now you can look back and say, "Oh, that was a data availability challenge with some extra steps." So it's amazing to see that not only is the OP Stack being used by others, but it has also evolved into something we originally attempted in a very chaotic, immature, and abstract way. We have come full circle, and you have done a great job of abstracting around it and making it work in a reasonable, sensible way. That's really cool.
02. The most important thing is to enter the production environment as soon as possible.
tdot: Plasma Mode still faces some challenges and unresolved issues that we are working hard to address. The key question is: how do we avoid this taking ten years? You know what I mean, right? We need to reach a stage where we can deliver results as soon as possible.
This is our idea. We already have many applications based on MUD that we want to launch on the mainnet immediately. We need to prepare a mainnet for these games as soon as possible. People are already waiting and ready. You need a chain that can be quickly launched and operational to run all these applications so that they can develop in parallel and improve while we solve the issues. It takes a long time from R&D to achieve production stability.
To launch something on the mainnet, making it permissionless, robust, and secure, requires a significant amount of time. It is already quite amazing to see the entire process of us achieving this goal. That's why we need to maintain a high level of agility, as there is too much going on. The entire ecosystem is developing very quickly. I think everyone is delivering a lot of innovations. That’s why you have to keep up, but you also cannot compromise on security and performance; otherwise, the system will not function.
Ben: Or it could be called a technical burden. The principle of minimal changes that you mentioned is one of the core ideas behind our Bedrock rewrite. I talked about the entire end-to-end rewrite, but more importantly, we reduced about 50,000 lines of code, which is very powerful in itself. Because you are right, these things are indeed quite challenging.
Every additional line of code takes you further away from the production environment, making it harder for things to be tested in real-world scenarios and introducing more opportunities for errors. Therefore, we are very grateful for all your efforts in driving this process, especially for the contributions made to the new operating model of the OP Stack.
tdot: The OP Stack has indeed created a way for you to quickly move forward on such matters. Coordinating everyone is very difficult, as we are obviously two different companies. At Lattice, we are building a game, a game engine, and a chain.
And you are building hundreds and thousands of things, delivering all these products on a regular basis. From a coordination perspective, this is indeed very challenging.
Ben: Yes, there is indeed a long way to go. But that is precisely the core appeal of modularity. For me, from the perspective of the OP Stack, this is one of the most exciting things, not to mention the amazing games and virtual worlds being built on Redstone right now. Purely from the perspective of the OP Stack, this is a very powerful example that proves many excellent core developers have joined in and improved this stack, which is truly remarkable.
This is the first time you can significantly change the properties of the system with a single boolean flag. As you said, there is indeed a long way to go to achieve this completely. But even getting close to doing this effectively requires modular support, right? For us, it is a relief to see you achieve this without needing to, for example, rewrite L2 Geth. For me, this proves that modularity is working.
tdot: The situation has improved now. From this example, you have turned everything into independent small modules that can be adjusted and have their attributes changed. So I am very much looking forward to seeing what new features will be integrated. I remember we were once concerned that we had a fork containing all the changes to the OP Stack that needed to be merged into the main branch. At that time, we thought, "Oh my God, reviewing everything would be crazy."
We had to break it down into smaller parts, but the whole process went very smoothly. The collaboration atmosphere with the team was great, so the review process was also pleasant. It felt very natural. And I think the process was very fast in reviewing and addressing some potential issues. Everything went unexpectedly smoothly.
Ben: This is really great. One of our focuses this year is to create contribution pathways for the OP Stack. So I really appreciate your participation in testing and advancing these processes. I'm glad these processes haven't been overwhelming, and we've achieved some results. Speaking of which, I'm curious, from your perspective, how do you see this work evolving next? What are you most looking forward to developing next?
tdot: There are many different directions of work. The main focus is integration with the fault-proof mechanism. We are taking an incremental approach to decentralizing the entire tech stack and making it more permissionless, with the ultimate goal of features such as permissionless operation and forced exits.
We have this ultimate goal, and we are gradually achieving it while maintaining security. One challenge is that sometimes not going live on the mainnet can be easier because it avoids the need for hard forks. You might think, "Oh, I'll just wait until everything is fully ready to release, so there won't be a need for hard forks, and there won't be a technical burden." However, if you want to quickly launch the mainnet, you have to deal with these complex upgrades and release frequently. Achieving this while maintaining high availability is always a challenge.
I believe there will be many upgrades on the Plasma Mode side once the fault-proof mechanisms and all these parts are ready. I think there is still some room for optimization in batch submission of commitments. Right now, we do it very simply: one commitment per transaction. And a commitment is just the hash of the input data stored off-chain.
We will keep it as simple as possible for now, so that the review can be straightforward and quick, and there won’t be much difference for the OP Stack. However, there are some optimizations now that can make it cheaper, such as batching the commitments or submitting them to a blob, or using other different methods. So we will definitely look into this to reduce the costs of L1.
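A rough back-of-the-envelope sketch shows why batching commitments helps. The gas numbers below are assumptions chosen for illustration, not actual L1 pricing: the savings come from paying the fixed per-transaction overhead once rather than once per commitment.

```python
# Illustrative cost sketch; the gas numbers are assumptions for
# comparison, not actual L1 pricing.

TX_OVERHEAD = 21_000      # base gas for any L1 transaction (assumed)
PER_BYTE_GAS = 16         # rough calldata gas per non-zero byte (assumed)
COMMITMENT_SIZE = 32      # bytes in one hash commitment

def cost_one_per_tx(n: int) -> int:
    """Current simple scheme: each commitment in its own transaction."""
    return n * (TX_OVERHEAD + COMMITMENT_SIZE * PER_BYTE_GAS)

def cost_batched(n: int) -> int:
    """Batched scheme: all n commitments packed into one transaction,
    paying the per-transaction overhead only once."""
    return TX_OVERHEAD + n * COMMITMENT_SIZE * PER_BYTE_GAS
```

Under these assumed numbers, the fixed overhead dominates the 32-byte payload, so the per-commitment cost of the batched scheme falls sharply as the batch grows; posting commitments to a blob would shift the per-byte term to blob pricing instead.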
This is something we are very excited about. Of course, we are also looking forward to all the upcoming interoperability-related content and being able to interact across all chains. Figuring this out will be a huge advancement for users.
Many of these tasks will certainly need to be implemented by you. However, we hope to clarify what they look like under Plasma Mode and with different security assumptions.
Ben: