How to build a sphere – it's more difficult than you think!

[Image: a geodesic sphere, from Wikipedia]

Gilgamesh

In my new geometry library, Gilgamesh, I have started with some real-world examples from structural biology: building solvent-excluded and ball-and-stick representations of molecules. For the ball-and-stick models we need good low-polygon spheres, as big complexes may have hundreds of thousands of atoms.

The sphere class allows you to add transformed and coloured spheres to a mesh. For examples, see basic_mesh and molecules. You can also supply your own lambda function to generate distorted versions of the sphere, for example to build models of moons.

https://github.com/andy-thomason/gilgamesh/blob/master/include/gilgamesh/shapes/sphere.hpp
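As an illustration of the kind of distortion lambda mentioned above, here is a sketch of a radial bump function. The function shape and names are assumptions for illustration, not the exact interface in sphere.hpp (glm is already a Gilgamesh dependency).

// Illustrative sketch only: a radial distortion of the kind described above.
// The exact gilgamesh callback signature may differ.
#include <glm/glm.hpp>
#include <cmath>

// Push each sphere vertex in or out a little along its direction to fake a cratered moon.
inline glm::vec3 moon_distort(const glm::vec3 &p) {
  glm::vec3 dir = glm::normalize(p);
  float bump = 0.05f * std::sin(12.0f * dir.x) * std::cos(12.0f * dir.y);
  return p * (1.0f + bump);
}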

Spheres have always been an interesting challenge to get right in a geometry library; common problems include unevenly sized triangles and erratic texture mapping.

In the Octet OpenGL framework that I developed for my games students, I had several geometric primitives such as spheres and cones. I wanted to present this more formally, with a well-balanced set of primitives represented by their own classes, so that geometric operations such as ray tracing and constructive solid geometry could be done without reference to the mesh class.

Many 3D editing packages offer a choice of Mercator-style spheres and Geospheres.

Mercator spheres divide the sphere into lines of latitude and longitude, using the sine and cosine of these two angles much like a Mercator projection on a globe. The problem with these is that they crowd triangles towards the poles and cause problems if we want to map textures to the sphere.
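For reference, a minimal sketch of the latitude/longitude construction (the names are illustrative, not any particular library's API). Every ring gets the same number of vertices, which is exactly why the triangles bunch up at the poles.

#include <cmath>
#include <vector>

struct vertex { float x, y, z; };

// Build the vertex grid of a "Mercator-style" sphere: lat_steps+1 rings of lon_steps+1 vertices.
std::vector<vertex> mercator_sphere(float radius, int lat_steps, int lon_steps) {
  const float pi = 3.14159265358979f;
  std::vector<vertex> verts;
  for (int i = 0; i <= lat_steps; ++i) {
    float phi = pi * i / lat_steps;           // 0 at the north pole, pi at the south pole
    for (int j = 0; j <= lon_steps; ++j) {
      float theta = 2 * pi * j / lon_steps;   // longitude, 0..2*pi
      verts.push_back({radius * std::sin(phi) * std::cos(theta),
                       radius * std::cos(phi),
                       radius * std::sin(phi) * std::sin(theta)});
    }
  }
  return verts;
}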

Geospheres generally look nicer with smaller numbers of triangles. They use triangles that are approximately equilateral in a Buckminster Fuller style.

To construct a geosphere, it is possible to inflate a near-spherical primitive such as a dodecahedron and add extra triangles to increase the curvature. This is what I do in the Octet library. Every triangle in the primitive is split into four or more new triangles, with each new vertex pushed out to sit on the sphere. The problem with this is that the triangles become uneven, and we need the coordinates of the dodecahedron to start with. There is an excellent Wikipedia article on the dodecahedron that can help with this, however.
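As a sketch of the subdivision step (illustrative names, not the actual Octet code): each triangle is split into four, with the edge midpoints projected back onto the sphere.

#include <glm/glm.hpp>
#include <array>
#include <vector>

using tri = std::array<glm::vec3, 3>;

// One round of 1-to-4 subdivision: edge midpoints are normalised back onto the
// sphere, so each round quadruples the triangle count and improves the curvature.
std::vector<tri> subdivide(const std::vector<tri> &tris, float radius) {
  std::vector<tri> out;
  for (const tri &t : tris) {
    glm::vec3 m01 = glm::normalize(t[0] + t[1]) * radius;
    glm::vec3 m12 = glm::normalize(t[1] + t[2]) * radius;
    glm::vec3 m20 = glm::normalize(t[2] + t[0]) * radius;
    out.push_back({t[0], m01, m20});
    out.push_back({t[1], m12, m01});
    out.push_back({t[2], m20, m12});
    out.push_back({m01, m12, m20});
  }
  return out;
}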

After toying with several methods of generating equally spaced triangles on a sphere, I looked at the simple method of adapting the Mercator sphere to use roughly equilateral triangles.

First we choose the number of subdivisions along a line of longitude, from pole to pole, and divide Pi by this to get our triangle edge length. Then, for each ring as we go down from the north pole, we generate n vertices, where n is the number of edge lengths that fit around that ring, i.e. floor(2*Pi*r/length).
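Here is a rough sketch of that vertex generation (illustrative names rather than the exact code in sphere.hpp; the poles get a single vertex because the ring radius there is zero).

#include <algorithm>
#include <cmath>
#include <vector>

struct vec3 { float x, y, z; };

// Generate the rings of vertices for a sphere of the given radius with
// num_segments subdivisions from pole to pole.
std::vector<std::vector<vec3>> make_rings(float radius, int num_segments) {
  const float pi = 3.14159265358979f;
  float edge_len = pi / num_segments;               // target edge length on the unit sphere
  std::vector<std::vector<vec3>> rings;
  for (int i = 0; i <= num_segments; ++i) {
    float phi = pi * i / num_segments;              // angle down from the north pole
    float r = std::sin(phi);                        // ring radius on the unit sphere
    int n = std::max(1, (int)std::floor(2 * pi * r / edge_len));
    std::vector<vec3> ring;
    for (int j = 0; j < n; ++j) {
      float theta = 2 * pi * j / n;
      ring.push_back({radius * r * std::cos(theta),
                      radius * std::cos(phi),
                      radius * r * std::sin(theta)});
    }
    rings.push_back(ring);
  }
  return rings;
}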

This gives us our vertices; now how do we generate the triangles? Each strip has a triangles along its top edge and b along its bottom edge, and these numbers are usually different. We walk around the ring, adding a triangle to whichever edge has the lowest angle value. This algorithm selects a roughly equilateral triangle at every step and fulfils our requirements. One side effect is that we always get six triangles at the top and bottom of the sphere, forming a hexagon at each pole.
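In code, the greedy walk looks something like the following sketch (again illustrative rather than the exact Gilgamesh code; winding order is not addressed, and the degenerate triangles that appear at the single-vertex pole rings are simply skipped).

#include <array>
#include <vector>

// Stitch two adjacent rings of vertices into a band of triangles.
// a_start/na and b_start/nb are the first index and vertex count of the
// upper and lower rings; the angle of vertex k in a ring of n is 2*pi*k/n.
void stitch_rings(int a_start, int na, int b_start, int nb,
                  std::vector<std::array<int, 3>> &triangles) {
  auto add = [&](int i0, int i1, int i2) {
    if (i0 != i1 && i1 != i2 && i2 != i0) triangles.push_back({i0, i1, i2});
  };
  int ai = 0, bi = 0;
  // Always advance the ring whose next vertex has the smaller angle; this
  // keeps every triangle close to equilateral.
  while (ai < na || bi < nb) {
    float next_a = float(ai + 1) / na;   // fractional angle of the next upper vertex
    float next_b = float(bi + 1) / nb;   // fractional angle of the next lower vertex
    if (bi >= nb || (ai < na && next_a <= next_b)) {
      add(a_start + ai % na, a_start + (ai + 1) % na, b_start + bi % nb);
      ++ai;                              // triangle with its base on the upper ring
    } else {
      add(a_start + ai % na, b_start + (bi + 1) % nb, b_start + bi % nb);
      ++bi;                              // triangle with its base on the lower ring
    }
  }
}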

The next task is how to UV map the sphere. We need to be careful in three places: at the two poles, and at the meridian, the zero-angle line joining the poles. At a pole, choosing a single UV coordinate, say (0, 0), will distort the map, so we may choose to duplicate the pole vertex seven times and assign the coordinates (0, 0), (1/6, 0) .. (1, 0). Why seven and not six? This is related to the meridian mapping problem. We need the final triangle to run from ((n-1)/n, v) to (1, v) so that the texture wraps correctly at the seam, so we must duplicate one vertex at the meridian as well. If you don't understand this, try it yourself!
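A small sketch of the pole duplication described above (the helper and names are illustrative, not the actual Gilgamesh interface).

#include <glm/glm.hpp>
#include <vector>

struct uv_vertex { glm::vec3 pos; glm::vec2 uv; };

// Emit seven copies of a pole vertex with u = 0, 1/6, ..., 1; the extra copy
// at u = 1 lets the texture wrap correctly at the seam.
std::vector<uv_vertex> make_pole_vertices(const glm::vec3 &pole, float v) {
  std::vector<uv_vertex> verts;
  for (int k = 0; k <= 6; ++k) {
    verts.push_back({pole, glm::vec2(k / 6.0f, v)});
  }
  return verts;
}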

For unmapped spheres we will want the option of not duplicating these vertices, as they add overhead. I intend to add this feature later.

Getting started with Gilgamesh

Start by cloning Gilgamesh and its dependencies (glm and minizip).

git clone https://github.com/andy-thomason/gilgamesh
cd gilgamesh
git submodule init
git submodule update

Make a build directory and use CMake to construct a project (makefiles or Visual Studio).

mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..

Now you can build the examples by running Visual Studio or make.

Happy meshing!

Minizip – a small zip library

I’ve recently split my zip decoder off from Octet (and Vookoo) and it has a project of its own.

It lives here: https://github.com/andy-thomason/minizip

Minizip is a tiny modern C++ library for decoding zip files and other file formats that use the Deflate compression method, such as PNG.

It is going to be used for the Zegami image library project in Oxford by my student David Babera and so should get some testing.

There are two classes at present:


minizip::decoder
minizip::zipfile_reader

The decoder class takes a block of Deflate-encoded data and produces raw bytes. The zipfile_reader class manages a mapped zip file and enables parallel extraction of data from zip files.

I may move my bzip2 decoder into this project also to avoid duplication of effort. Zip files can optionally include Bzip2 compressed data.

I will also write a microscopic version of an LZMA codec if I get the time.

This is a basic example of use:


#include <minizip/decoder.hpp>
#include <minizip/zipfile_reader.hpp>

#include <cstdint>
#include <iostream>
#include <vector>

int main() {
  // if_zip is assumed to be a byte array containing a zip archive,
  // for example one embedded in the executable.
  minizip::zipfile_reader reader(if_zip, if_zip + sizeof(if_zip));

  // Get a list of file names.
  auto names = reader.filenames();

  // Read the first file using the name as a lookup.
  std::vector<uint8_t> text = reader.read(names[0]);
  std::cout.write((const char*)text.data(), text.size());

  // Get a list of directory entries.
  auto entries = reader.dir_entries();

  // Read the first file using the directory entry.
  std::vector<uint8_t> text2 = reader.read_entry(entries[0]);
  std::cout.write((const char*)text2.data(), text2.size());
}

Getting OpenGL to run on a headless server

Today I have been looking at methods of running OpenGL – and ultimately Vulkan – on a headless server so that we can do thin client experiments with Vookoo, our Vulkan wrapper library.

There is an excellent article on the subject at http://renderingpipeline.com/2012/05/windowless-opengl/ which gave me some basic clues about GLX, the interface to OpenGL on Linux machines.

I did a few little experiments and discovered that it was quite easy to run OpenGL examples such as glxgears (http://linux.die.net/man/1/glxgears) from an SSH session, provided you had a working X server running and knew the display name.


$ ssh me@myserver.com
$ export DISPLAY=:1 # note: you should check this variable on your xterm
$ glxgears

And you should have glxgears (from the Mesa utilities) running on the remote machine's desktop.

Of course, we don't want to actually do this, as you can't see the result unless you are in the room with the server, but it does enable us to create framebuffers on our server, render to them and pump the result to an Android app or web page.
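Following the approach in the article above, a minimal off-screen GLX context looks roughly like this sketch (compile with -lX11 -lGL; this illustrates the technique and is not code from Vookoo).

#include <GL/glx.h>
#include <X11/Xlib.h>
#include <cstdio>

int main() {
  Display *dpy = XOpenDisplay(nullptr);   // uses $DISPLAY, e.g. ":1"
  if (!dpy) { std::fprintf(stderr, "no X display\n"); return 1; }

  // Pick a framebuffer configuration that supports off-screen pbuffers.
  static int fb_attribs[] = { GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT, None };
  int count = 0;
  GLXFBConfig *configs = glXChooseFBConfig(dpy, DefaultScreen(dpy), fb_attribs, &count);
  if (!configs || !count) return 1;

  // A small pbuffer to make current against; real rendering can go to FBOs.
  static int pb_attribs[] = { GLX_PBUFFER_WIDTH, 256, GLX_PBUFFER_HEIGHT, 256, None };
  GLXPbuffer pbuf = glXCreatePbuffer(dpy, configs[0], pb_attribs);

  GLXContext ctx = glXCreateNewContext(dpy, configs[0], GLX_RGBA_TYPE, nullptr, True);
  glXMakeContextCurrent(dpy, pbuf, pbuf, ctx);

  // From here we can create framebuffers, render, and read the pixels back.
  std::printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
  return 0;
}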

Thin clients are likely to become more popular as internet data rates improve. The old HTML-based web model is likely to migrate to web pages that are merely windows on dynamic, server-generated content.

Our goal this summer at Goldsmiths College is to create a sample thin client VR system that pumps 3D content to a smartphone or desktop app so that multiple users can play a game or enjoy a visual experience on a single server.

We hope to be applying thin client technology to molecular dynamics (Bioblox) and art (Mutator).