Bitruncated {4,4,4} Flythrough

What's the natural next step after spherical images? Spherical animations flying through the honeycombs! This is fun to watch using the YouTube app on your phone, where you can pan by moving your phone around.

http://youtu.be/Y6ZI1kxZoFU

John Baez suggested trying this particular honeycomb, as well as reversing the coloring. I really like the bitruncated honeycombs, especially those derived from self-dual symmetry groups because they end up nearly regular, sometimes even completely regular.

Like the honeycomb of buckyballs I've posted about, this one is cell-transitive, edge-transitive, and vertex-transitive, meaning you can map any one of these elements to any other of the same type while preserving the overall symmetry of the honeycomb. It is not face-transitive, however, since it has both square and octagonal faces. Compare with these other bitruncations, which are likewise transitive on all elements except faces.

Bitruncated {5,3,5}
http://en.wikipedia.org/wiki/Order-5_dodecahedral_honeycomb#Bitruncated_order-5_dodecahedral_honeycomb

Bitruncated {3,6,3}
http://en.wikipedia.org/wiki/Triangular_tiling_honeycomb#Bitruncated_triangular_tiling_honeycomb

Bitruncated {3,4,3} (a spherical honeycomb)
https://en.wikipedia.org/wiki/Truncated_24-cells#Bitruncated_24-cell

What's the natural next step after this? ...controlling the flythrough of honeycombs in real time, with VR stereo. Andrea Hawksley, Vi Hart, Henry Segerman, and Mike Stay let you do just that with a very cool online app, but ultimately I'd like to see something that supports the level of detail achievable with slow-rendered video like this, especially for honeycombs with paracompact cells.

http://hawksley.github.io/hypVR/

More Links

EleVR blog post about the online app:
http://elevr.com/portfolio/hyperbolic-vr/

Spherical stills:
http://plus.google.com/u/0/+RoiceNelson/posts/ATixjkgruk3

My post on a honeycomb of buckyballs:
http://plus.google.com/+RoiceNelson/posts/ficnsDN75rd

Implementation Details

Unfortunately, this wasn't particularly easy to produce, because I had to generate the full geometry for each frame. I chose to do 1M edges, which results in a 500MB POV-Ray definition file for each still. I generated all those files first, which took about 12 hours and filled up my drive! It took the rest of the weekend to render the 625 stills that make up this video, at 4k by 2k pixel resolution. I used ffmpeg to compose those into a video, then repeated that output three times (I chose a flythrough path that loops back to its starting point). The final video was 2.7 GB.
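
The per-still geometry dump boils down to writing one POV-Ray cylinder per edge. Here is a minimal sketch in Python; the function name, edge format, and radius are illustrative assumptions, and the real definition files also carry textures, camera, and lighting:

```python
# A sketch of the per-still geometry dump, assuming each edge is a pair
# of 3D endpoints. Names and the radius are illustrative; the real
# definition files also include textures, camera, and lighting.
def write_pov_frame(edges, path, radius=0.02):
    """Write a POV-Ray definition file with one cylinder per edge."""
    with open(path, "w") as f:
        for (x1, y1, z1), (x2, y2, z2) in edges:
            f.write(f"cylinder {{ <{x1}, {y1}, {z1}>, "
                    f"<{x2}, {y2}, {z2}>, {radius} }}\n")

# Example: one unit edge along the x axis.
write_pov_frame([((0, 0, 0), (1, 0, 0))], "frame_0000.pov")
```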

I can (and did) generate low-res test runs. Those take about 15 minutes (1k edges, 800x400 pixel stills), but even that makes for a slow iteration loop when trying to find a good flythrough path. I'm sure I could have come up with a better path given more time, but I got impatient. I wish I had a rendering farm :)
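
The video assembly step comes down to a couple of ffmpeg invocations. A sketch in Python, assuming stills named still_0000.png through still_0624.png at 25 fps (the names, frame rate, and codec settings are assumptions, not my actual values):

```python
# A sketch of the video assembly with ffmpeg, run from Python for
# illustration. The still names, frame rate, and codec settings are
# assumptions, not my actual values.
import subprocess

# Compose the 625 rendered stills into one pass of the flythrough.
subprocess.run(
    ["ffmpeg", "-framerate", "25", "-i", "still_%04d.png",
     "-c:v", "libx264", "-pix_fmt", "yuv420p", "one_pass.mp4"],
    check=True)

# Since the path loops back to its start, play it three times total:
# -stream_loop 2 repeats the input twice more, -c copy skips re-encoding.
subprocess.run(
    ["ffmpeg", "-stream_loop", "2", "-i", "one_pass.mp4",
     "-c", "copy", "final.mp4"],
    check=True)
```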

Comments

  1. Why not use an implicit representation, for example distance estimation, to avoid generating the full geometry? I guess it is possible to do it with POV-Ray... something like this script (kIFSe.pov by Samuel Benge, 2010): http://news.povray.org/povray.text.scene-files/attachment/%3C4cff1d96%40news.povray.org%3E/kifse.pov.txt

  2. Thank you, Abdelaziz Nait Merzouk. I've wanted to experiment with Fragmentarium, but haven't made the time yet. I didn't know POV-Ray could do IFS-type renderings, so it's good to know it can.

    Is an implicit representation how you rendered your background image here? (G+ doesn't seem to want to let me link to the actual image.)

    https://plus.google.com/114982179961753756261

    That is really beautifully executed! You've rendered it so much more effectively, with the lighting and a deep level of recursion. It's wonderful. How long does an image like that take to render?

  3. Yes, an implicit representation that gives a distance estimate to the object is used. It is almost exactly the same technique as for the 2D tilings. The distance estimate is then used for ray-marching.

    That picture took something like 2 min on an Nvidia 650 GPU. It takes that time because I used approximately 200 subframes to remove noise and get better antialiasing. Lower-quality renders are done interactively.
    Here is a long discussion thread about it ("Polyhedrons, many many polyhedrons...") :-) : http://www.fractalforums.com/fragmentarium/solids-many-many-solids/

    The POV-Ray script is not mine; I have very little knowledge of POV, if any. It is not plain IFS but the so-called Kaleidoscopic IFS... though IIRC, it is possible to do IFS with POV scripts by generating a lot of little spheres or boxes on the fly...

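For anyone who wants to experiment with the distance-estimate approach discussed in the comments above, here is a minimal sketch in Python of ray marching (sphere tracing) against a toy Kaleidoscopic-IFS distance estimator. The fold, scale, and parameters are illustrative placeholders, not taken from Abdelaziz's actual shaders:

```python
# Distance-estimated ray marching with a toy Kaleidoscopic-IFS DE.
# All parameters here are illustrative, not from the actual renders.
import numpy as np

def kifs_de(p, scale=2.0, offset=1.0, iterations=10):
    """Toy Kaleidoscopic-IFS distance estimate: fold across the
    coordinate planes, scale about a fixed point, then undo the
    accumulated scaling on the final distance."""
    s = 1.0
    for _ in range(iterations):
        p = np.abs(p)                         # kaleidoscopic fold
        p = scale * p - offset * (scale - 1)  # scale and translate
        s *= scale
    return np.linalg.norm(p) / s

def ray_march(origin, direction, de, max_steps=200, eps=1e-4, max_dist=20.0):
    """Step along the ray by the distance estimate; the DE guarantees
    no surface lies closer than that, so each step is safe."""
    t = 0.0
    for _ in range(max_steps):
        d = de(origin + t * direction)
        if d < eps:        # close enough to count as a hit
            return t
        t += d
        if t > max_dist:   # the ray escaped the scene
            return None
    return None

# Cast one ray at the fractal from z = -4, looking down +z.
t_hit = ray_march(np.array([0.0, 0.0, -4.0]),
                  np.array([0.0, 0.0, 1.0]), kifs_de)
```

Because each step advances by the distance estimate, a cheap DE makes scenes like these tractable without generating any explicit geometry at all.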
