This is super creative. For those who like to reduce their experiences of fun cool stuff, I’ll describe it: Take a 2D line-drawing algorithm that preserves the geometry of a given view. Run each of the source images for a Gaussian splat through that line-drawing tool, then build your splat from the resulting 2D line images.
Result: a 3d scene that can be posed and shows as a 2d line illustration.
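The preprocessing step described above is just "stylize every training image, keep the filenames so the camera poses still line up, then train the splat as usual." A minimal sketch, where `stylize` is a hypothetical placeholder for whatever line-drawing inference call you have (the actual API depends on your setup):

```python
from pathlib import Path

def preprocess_for_splat(src_dir, dst_dir, stylize, pattern="*.jpg"):
    """Run every source photo through a line-drawing stylizer before
    Gaussian-splat training.

    `stylize` is any bytes -> bytes image function (e.g. a wrapper
    around a line-drawing model's inference); it is a parameter here
    because the exact tool and API are assumptions, not prescribed.
    """
    src_dir, dst_dir = Path(src_dir), Path(dst_dir)
    dst_dir.mkdir(parents=True, exist_ok=True)
    outputs = []
    for img in sorted(src_dir.glob(pattern)):
        # Keep the original filename so the existing camera poses
        # (from COLMAP/OpenSfM etc.) still match each image.
        out = dst_dir / img.name
        out.write_bytes(stylize(img.read_bytes()))
        outputs.append(out)
    return outputs
```

The splat trainer then just points at `dst_dir` instead of the original photos; nothing else in the pipeline changes.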
I like a lot of things about this, but mostly I like the facility demonstrated here, and the experimentation. So many interesting things to do are in hobbyist reach right now, it’s kind of breathtaking.
I was dazzled by the drawing itself. Then, by accident, I discovered you can zoom in and out too. And on top of that, you can also rotate 360 degrees around the object.
Too far out of my field for me to understand how impressed I should be - but I am impressed.
Informative-Drawings already has monocular depth estimation built in; that's why its line results are so beautifully consistent. But without this extra step of combining results from multiple camera positions, you get 2.5D geometry, not 3D.
3D Gaussian splatting might supplant polygonal 3D for many things. It makes sense at least for 3D-scanned scenes, and possibly for synthetic scenes as well. Very interesting technology! I do a lot with drone photogrammetry, so I'm keeping an eye on this tech.
So far I'm using OpenDroneMap. Make sure to use the non-default planar mode for much better reliability in the OpenSfM phase. I'm also experimenting with COLMAP, which requires CUDA (with OpenDroneMap, CUDA is optional; I believe they support CPU-only as well as GPU).
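For reference, here's roughly how selecting planar mode looks with ODM's Docker image; the dataset path and project name are placeholders, and you should check your ODM version's docs for the exact option spelling:

```shell
# Run OpenDroneMap on ./datasets/project, using the planar SfM
# algorithm instead of the default incremental reconstruction.
docker run -ti --rm \
  -v "$(pwd)/datasets:/datasets" \
  opendronemap/odm \
  --project-path /datasets project \
  --sfm-algorithm planar
```

Planar mode assumes mostly nadir (straight-down) imagery, which is why it tends to be more robust for typical mapping-style drone flights.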