So briefly: all the poems are pre-rendered and packed into a texture. We render the scene into an off-screen buffer, storing in it each pixel's texture coordinates into that poem texture (each model in the scene has a different poem). Finally we render to the screen, using the intermediate buffer to look up the poems in the texture. We make sure to sample the buffer at a resolution matching the number of character rows and columns, so that we only ever render whole characters. Hope that makes sense!
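In case a concrete version helps: here's a tiny CPU-side sketch of the idea in Python. The real thing would be fragment shaders sampling actual textures; every name here (`ATLAS`, `render_id_pass`, `render_final_pass`, the cell sizes) is made up for illustration. The key bit is the final pass, which snaps its sample coordinates to the character grid so a cell can never show half a glyph.

```python
# Hypothetical CPU simulation of the two-pass poem lookup.
# Offline step: poems pre-rendered into an atlas; here each atlas "row"
# is just a string of characters, one cell per glyph.
ATLAS = [
    "roses are red ",
    "violets blue  ",
]

CHAR_W, CHAR_H = 8, 16           # glyph cell size in pixels (assumed)
SCREEN_W, SCREEN_H = 112, 32     # 14 columns x 2 rows of characters

def render_id_pass(model_at):
    """Pass 1: render the scene off-screen, writing per pixel the poem
    texture coordinate (here: the atlas row of the model's poem)."""
    return [[model_at(x, y) for x in range(SCREEN_W)]
            for y in range(SCREEN_H)]

def render_final_pass(id_buffer):
    """Pass 2: render to screen, looking up glyphs via the intermediate
    buffer, sampled at character-cell resolution (whole characters only)."""
    rows, cols = SCREEN_H // CHAR_H, SCREEN_W // CHAR_W
    out = []
    for cy in range(rows):
        line = []
        for cx in range(cols):
            # Snap the sample point to the centre of the character cell,
            # so every pixel in the cell reads the same atlas entry.
            poem_row = id_buffer[cy * CHAR_H + CHAR_H // 2][cx * CHAR_W + CHAR_W // 2]
            line.append(ATLAS[poem_row][cx % len(ATLAS[poem_row])])
        out.append("".join(line))
    return out

# Two "models": top half of the screen carries poem 0, bottom half poem 1.
ids = render_id_pass(lambda x, y: 0 if y < SCREEN_H // 2 else 1)
for line in render_final_pass(ids):
    print(line)
```

On the GPU the snapping would just be `floor(uv * grid) / grid` on the lookup coordinate before sampling the intermediate buffer, but the effect is the same as the cell-centre sampling above.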