Tuesday 19 March 2013

Using Blender with Google's "Photo Sphere" to Easily Create, Share 3-D Renders

Google's "Photo Sphere" for Android's camera is a neat feature that brings the ability to create 3-D photos to everyday mobile devices.  Why not use all the web applications available to easily create, share 3-D renders made in Blender?  Try it out: Cityscape and Basic Scene.

Here is a nice overview of Photo Sphere if you haven't seen it yet.

A rendered Photo Sphere of a Cityscape.  I found this scene on Blendswap by Dimmyxv.
Photo Spheres allow the photographer to take a 360-degree photo like a panorama, except that it also pans up/down, creating a full sphere around the user.  The Photo Sphere application typically saves this series of photos as a single equirectangular projection that captures the entire sphere.  The technology has been around for a long time, but until recently I'd never seen such an easy way to create and share this type of medium with others.  Today, most 3-D scenes are still presented as flat 2-D renders; not every scene should be a photo sphere, but a lot of them could benefit from full user immersion.   The process on an Android phone (e.g. a Nexus 4) is extremely easy, but in Blender it isn't so straightforward.  This post will help walk you through that process:
  1. Exporting photo spheres to an image.
  2. Uploading a rendered photo sphere to Google+.
  3. Downloading photo spheres from Google+.
  4. Importing photo spheres in Blender.
  5. Future thoughts.

Exporting Photo Spheres

I only know of two ways of doing this: baking a texture using reflection, and using the Equirectangular camera.  The first can only be done with Blender's internal render engine, and the second only with Cycles.

Scene Setup

To change the perspective: with the camera object selected, enable "Properties Panel -> Object Data -> Lens -> Panoramic".  The default panorama type is Fisheye; change it to Equirectangular.
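
If you'd rather script this, here is a minimal bpy sketch, assuming the default camera data name "Camera" and that the Cycles engine is enabled:

  import bpy

  scene = bpy.context.scene
  scene.render.engine = 'CYCLES'               # the Equirectangular camera needs Cycles
  cam = bpy.data.cameras["Camera"]             # assumes the default camera name
  cam.type = 'PANO'                            # Lens -> Panoramic
  cam.cycles.panorama_type = 'EQUIRECTANGULAR'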


Move the camera to a good location like the center (tip: use Alt+G and Alt+R to quickly clear the location/rotation).  The camera needs to be at eye level; I just chose an arbitrary height like 2 meters (1 Blender unit is 1 meter).  To work in feet or meters, go to "Properties -> Scene -> Units".  After adding a subject around your camera, you should have a scene that looks something like this:
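
The placement can be scripted too; a small sketch, assuming the camera object is named "Camera" and you want it 2 meters up, looking at the horizon:

  import bpy
  from math import radians

  cam_obj = bpy.data.objects["Camera"]                # assumes the default object name
  cam_obj.location = (0.0, 0.0, 2.0)                  # eye level, roughly 2 meters up
  cam_obj.rotation_euler = (radians(90), 0.0, 0.0)    # look at the horizon instead of straight down
  bpy.context.scene.unit_settings.system = 'METRIC'   # Properties -> Scene -> Units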

Render the Scene

Switching to Camera View (Numpad 0) and to Rendered viewport shading will now show what the final flat photo sphere image is going to look like:
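
An equirectangular image covers 360 degrees by 180 degrees, so a 2:1 aspect ratio is the natural choice.  Here is a hedged sketch for setting the output and rendering straight to a JPG (the 4096x2048 size and the output path are just examples):

  import bpy

  render = bpy.context.scene.render
  render.resolution_x = 4096                   # example size; any 2:1 ratio works
  render.resolution_y = 2048
  render.resolution_percentage = 100
  render.image_settings.file_format = 'JPEG'   # JPGs worked for me where PNGs did not (see below)
  render.filepath = "//photosphere.jpg"        # example path, relative to the .blend
  bpy.ops.render.render(write_still=True)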

Uploading Rendered Photo Spheres

Initially I tried to match the Nexus 4's output by setting the resolution to 2811x1118.  It turns out the size of the viewport (i.e. the size of your browser window) affects the distortion a great deal.  I suspect making this more user friendly would require some development on how Google+ transforms the image to 2-D based on page size.  Also, there are occasional artifacts while rotating; these go away if you zoom in or refresh the page.

For Google+ to accept a photo as a photo sphere, it needs specific XMP info encoded in the file.  If you don't know how to add this yourself, Google provides a free online converter (typically used for Google Earth/Maps/Street View).  I found that PNG files won't work (the download comes back as 0 bytes), but JPG files work fine.  The converter will ask for compass heading, horizontal FOV, and vertical FOV; make sure to set the vertical FOV to 180 and the horizontal FOV to 360.
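
If you'd rather tag the file yourself instead of using the online converter, the metadata Google looks for lives in the GPano XMP namespace.  A rough sketch using exiftool from Python (assumes exiftool is installed with XMP-GPano support; the file name and 4096x2048 size are examples that must match your render):

  import subprocess

  width, height = 4096, 2048   # must match your rendered image
  subprocess.check_call([
      "exiftool",
      "-XMP-GPano:UsePanoramaViewer=True",
      "-XMP-GPano:ProjectionType=equirectangular",
      "-XMP-GPano:FullPanoWidthPixels=%d" % width,
      "-XMP-GPano:FullPanoHeightPixels=%d" % height,
      "-XMP-GPano:CroppedAreaImageWidthPixels=%d" % width,
      "-XMP-GPano:CroppedAreaImageHeightPixels=%d" % height,
      "-XMP-GPano:CroppedAreaLeftPixels=0",
      "-XMP-GPano:CroppedAreaTopPixels=0",
      "photosphere.jpg",       # example file name
  ])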

OK, now you have your image with the XMP data.  Simply upload it to Google+ and it will automatically detect it as a photo sphere.  Try it out: Cityscape and Basic Scene.

Downloading Photo Spheres From Google+

For Google+, just go to the photo you want to download - like this.  There should be a download link at the bottom left: "Options -> Download Full Size".

Importing Photo Spheres in Blender

For Cycles, go to "Properties -> World" and change the surface to an "Environment Texture" (you might need to enable nodes).  Open the image you want to use as the background.  You can change the projection to either Equirectangular or Mirror Ball, depending on the type of photo you have.

Background set to "Environment Texture" 
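
The same world setup as a minimal bpy sketch, assuming you want the default Background node driven by an environment texture (the image path is just an example):

  import bpy

  world = bpy.context.scene.world
  world.use_nodes = True
  nodes = world.node_tree.nodes
  links = world.node_tree.links

  env = nodes.new("ShaderNodeTexEnvironment")
  env.image = bpy.data.images.load("//photosphere.jpg")   # example path
  env.projection = 'EQUIRECTANGULAR'                       # or 'MIRROR_BALL'
  links.new(env.outputs["Color"], nodes["Background"].inputs["Color"])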

Future Thoughts

Equirectangular Video Player

Google+, Photosynth, and other sites typically only handle 2-D photos.  It would be neat to make an "Equirectangular Video Player" that is cross-platform and easy to use - if it were me, I would probably write it in JavaScript and WebGL.  Players exist today (e.g. Kolor Eyes and krpano), so rendering a series of equirectangular photos to video is definitely possible, but they aren't yet what they should be.  Ultimately, it should be easily accessible to everyone, much like YouTube or Vimeo, and have a polished look and feel like Street View.

Emerging Technology

These types of 3-D images/videos are becoming more useful to me and others as emerging technology like the Oculus Rift keeps improving in performance.  Watching an equirectangular video on a flat 2-D screen makes it hard to understand what is going on unless the view can move around with your head.




Thursday 21 February 2013

Blender Tutorial: Binary Double Helix

Combining a mesh of a few vertices with a multitude of modifiers (mirror, array, deform, and a particle system), I was able to create a neat-looking DNA strand out of 1's and 0's - a Binary Double Helix, or Binary Binary Helix?  This tutorial assumes a lot about your knowledge of Blender.  You should probably be familiar with modifiers, the particle system, and nodes.



There are two ways of achieving this that I know of: 1) using a Screw modifier, or 2) using a Deform modifier.  There is already a decent video about using a Deform modifier to create a helix, so I won't post another one.  However, I wouldn't use the shapes that the video suggests; instead, use a Mirror modifier to create the mirrored side along with an Array modifier to create each rung.  I stacked them like this (a scripted version of the stack appears after the screenshot below):

  1. mirror - for duplicating the other side
  2. array - creates the individual rungs
  3. deform - twists the strand around the origin
  4. array - creates multiple sections... it would be really cool to set this up in 3's like DNA should be ;)
  5. subsurf (if you need it, e.g. if you're using cylinders instead of particles)
It should look something like this photo (the empty box is used for the Deform modifier's origin):
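
If you want to script the stack, here is a rough bpy sketch, assuming the active object is the strand mesh and an Empty named "Empty" sits at the helix axis (the names, counts, offset axis, and twist amount are all placeholders to tweak for your scene):

  import bpy

  obj = bpy.context.active_object                      # the strand mesh

  obj.modifiers.new("Mirror", 'MIRROR')                # 1. duplicate the other side

  rungs = obj.modifiers.new("Rungs", 'ARRAY')          # 2. individual rungs
  rungs.count = 10
  rungs.relative_offset_displace = (0.0, 0.0, 1.0)     #    stack rungs upward (depends on your mesh)

  twist = obj.modifiers.new("Twist", 'SIMPLE_DEFORM')  # 3. twist around the origin
  twist.deform_method = 'TWIST'
  twist.origin = bpy.data.objects["Empty"]             #    the empty at the helix axis
  twist.factor = 3.14159                               #    twist amount; newer versions call this "angle"

  sections = obj.modifiers.new("Sections", 'ARRAY')    # 4. multiple sections
  sections.count = 3

  obj.modifiers.new("Smooth", 'SUBSURF')               # 5. only if you need it
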
After creating the helix, make some 1's and 0's on another layer.  I used a box and a circle mesh to start with, because text meshes tend to have a lot of vertices.  Then add them to a group to be used with the particle system.  In the particle system (also shown in the image below and in the sketch after the list), just get the particles working on a single section, because currently (Blender 2.66) the modifiers won't affect the particle system until you apply them later.

  1. In the Emission section, turn on Vertex and turn off Random.
  2. In the Render section, turn on Unborn.
  3. Turn off the emitter so the vertices or faces don't show up (unless you want them to).
  4. Select the new group of 1's and 0's you made.
  5. Turn on Pick Random.
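
Here is the matching setup as a bpy sketch, assuming the strand object is active, you are on Blender 2.66-era property names, and your group of 1's and 0's is called "Bits":

  import bpy

  strand = bpy.context.active_object
  strand.modifiers.new("Bits", 'PARTICLE_SYSTEM')       # adds a particle system
  ps = strand.particle_systems[-1].settings

  ps.emit_from = 'VERT'                                 # 1. Emission -> Vertex
  ps.use_emit_random = False                            #    Emission -> Random off
  ps.show_unborn = True                                 # 2. Render -> Unborn
  ps.use_render_emitter = False                         # 3. hide the emitter mesh
  ps.render_type = 'GROUP'                              # 4. render the group of 1's and 0's
  ps.dupli_group = bpy.data.groups["Bits"]              #    assumed group name
  ps.use_group_pick_random = True                       # 5. Pick Random
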
You might have to play with the rotations of the 1's and 0's a bit (pun intended).  Also, use subdivide to create more vertices and space out the particles how you want.  I made the mistake here of only using 7 bits per rung.  You probably want a full 8 bits to add that extra coolness, or to write something in ASCII or another encoding.

For the cool glow effect I used these two videos (both use blur and glare filters); a minimal compositor sketch follows the list:
  1. Blender 2.64 Tutorial: Advanced Particle Trail in Cycles
  2. Blender Tutorial: Create a Spaceship Corridor in Blender - Part 2 of 2
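
For reference, a bare-bones glare pass in the compositor looks something like this; the videos go further by mixing in a blurred copy of the bright areas, which I won't reproduce here:

  import bpy

  scene = bpy.context.scene
  scene.use_nodes = True
  tree = scene.node_tree

  # assumes the default "Render Layers" and "Composite" nodes exist
  render_layers = tree.nodes["Render Layers"]
  composite = tree.nodes["Composite"]

  glare = tree.nodes.new("CompositorNodeGlare")
  glare.glare_type = 'FOG_GLOW'     # soft glow around bright areas
  glare.threshold = 1.0             # only pixels brighter than this will glow

  tree.links.new(render_layers.outputs["Image"], glare.inputs["Image"])
  tree.links.new(glare.outputs["Image"], composite.inputs["Image"])
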
Here is a .blend file to help you get started if you need it: dna_binary_1.2.blend