Former OpenAI researcher releases Flipbook prototype: skip HTML, generate each pixel directly with AI video model

According to Beating Monitoring, former OpenAI researcher Zain Shah and his team have released Flipbook, an experimental prototype that generates screen pixels directly with an AI model, replacing traditional web technologies like HTML and CSS. Every “page” a user sees is an AI-generated image; clicking anywhere on the image prompts the model to generate a new image, continuing the exploration. The interface contains no HTML code, no fixed links, and no predefined buttons; even the text is rendered as pixels within the image.
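Conceptually, this replaces the DOM and link graph with a chain of generated frames, each conditioned on the previous frame and the user's click. The following minimal sketch illustrates that loop; all names here (`Frame`, `generate_frame`) are hypothetical stand-ins, not Flipbook's actual API, and the generator is stubbed out.

```python
# Sketch of Flipbook-style interaction: no DOM, only a chain of frames.
# generate_frame is a stub standing in for the video model.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Frame:
    pixels: bytes  # raw image data; stands in for a generated screen
    step: int      # how many interactions produced this frame

def generate_frame(prev: Optional[Frame], click: Optional[Tuple[int, int]]) -> Frame:
    """Stub generator: the next frame is conditioned on the previous
    frame's pixels plus the click position."""
    step = 0 if prev is None else prev.step + 1
    seed = b"" if prev is None else prev.pixels
    return Frame(pixels=seed + repr(click).encode(), step=step)

# Interaction loop: every click yields a fresh image instead of a page load.
frame = generate_frame(None, None)       # initial "page"
for click in [(120, 64), (300, 210)]:    # user clicks anywhere on the image
    frame = generate_frame(frame, click)

print(frame.step)  # 2 — two clicks produced two new frames
```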

The prototype is built on LTX Studio, the open-source DiT (Diffusion Transformer) video generation model from Israeli company Lightricks. After optimization, it can stream real-time 1080p video at 24fps to users’ screens over WebSocket, with the backend running on Modal Labs’ serverless GPUs. Shah says Flipbook’s functionality is still limited and the team is designing around visual explanations, but it points to a broader direction: as models become more accurate and stateful, the approach could extend to structured UI, including programming scenarios.
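The serving side reduces to a generate-and-push loop that has to hold a 24fps budget (about 41.7 ms per frame). The sketch below shows only that pacing logic under stated assumptions; the WebSocket and the model are stubbed, and none of the names reflect Flipbook's actual implementation.

```python
# Hedged sketch: pace generated frames at 24 fps and push each one down
# a socket-like channel. The real transport (WebSocket) is stubbed.
import asyncio
import time

FPS = 24
FRAME_INTERVAL = 1.0 / FPS  # ~41.7 ms budget per frame

async def stream_frames(send, n_frames: int) -> None:
    """Generate-and-send loop: sleep only for the time left in the frame
    budget, so generation latency eats into the interval rather than
    stacking on top of it."""
    next_deadline = time.monotonic()
    for i in range(n_frames):
        frame = f"frame-{i}".encode()  # stand-in for a generated 1080p frame
        await send(frame)
        next_deadline += FRAME_INTERVAL
        delay = next_deadline - time.monotonic()
        if delay > 0:
            await asyncio.sleep(delay)

sent: list = []

async def fake_send(frame: bytes) -> None:
    sent.append(frame)  # a real server would await websocket.send(frame)

asyncio.run(stream_frames(fake_send, 5))
print(len(sent))  # 5 frames pushed through the stubbed socket
```

Keeping an absolute deadline (`next_deadline`) rather than sleeping a fixed interval after each send is what keeps the stream at a steady frame rate when generation time varies.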

Shah previously worked on AI and robotics research at OpenAI, later served as a creative technologist at Samsung, and is a YC S13 alumnus. The team also includes Eddie Jiao, formerly an engineer at Humane and Slack, and former Apple engineer Drew O’Carr.
