Wednesday, November 06, 2013

Duna Landing

Today I landed on Duna (fourth planet from the Sun):



The landing was miraculous. I completely ran out of fuel before I was even in orbit around Duna. My approach velocity was way too high to capture into orbit, so I used my last bit of fuel to aim my trajectory at Duna's atmosphere. Incredibly, aerobraking through the atmosphere slowed me down enough to get into orbit. The orbit took me around into a second pass through the atmosphere, and on that pass I deployed my parachutes. Unfortunately Duna's atmosphere isn't very thick, so the parachutes couldn't slow the ship enough and most of it was destroyed by the surface impact.

Here is the ship launching from Kerbin:



If you look closely, you can see my main thruster is a nuclear thermal rocket.

My Mun landing happened at 13 hours of gameplay and the Duna landing was at 19 hours.

Tuesday, November 05, 2013

Landing on the Mun

Today I achieved the impossible: safely landing a kerbal on the Mun.



I landed on the dark side of the Mun, so it is a bit hard to see. Next time I'll bring some lights on the ship. Unfortunately the ship is completely out of fuel, so Bill will have to wait there until I send a rescue mission. Here is the ship preparing for launch from Kerbin:



Here is the support crew, orbiting the Mun:

Wednesday, October 30, 2013

Fluid Simulation

http://www.byronknoll.com/smoke.html

This is a simulation of the Navier-Stokes equations. The implementation is based on this paper:

Stam, Jos (2003), Real-Time Fluid Dynamics for Games

Unfortunately the simulation turned out to be very slow, so my demo has a tiny grid size.
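
For anyone curious about the core of Stam's method, here is a rough sketch (in JavaScript, not my demo's actual code) of the diffusion step using Gauss-Seidel relaxation. The grid layout, the fixed 20 iterations, and the omission of the boundary-condition step are simplifications for illustration.

```javascript
// Sketch of the diffusion step from Stam's "Real-Time Fluid Dynamics for Games".
// N is the grid resolution; x and x0 are flat arrays of size (N+2)*(N+2)
// holding the new and previous density (or velocity component) fields.
// The boundary-condition step from the paper is omitted here.
function diffuse(N, x, x0, diff, dt) {
  const a = dt * diff * N * N;
  const IX = (i, j) => i + (N + 2) * j;
  for (let k = 0; k < 20; k++) {           // Gauss-Seidel relaxation sweeps
    for (let i = 1; i <= N; i++) {
      for (let j = 1; j <= N; j++) {
        x[IX(i, j)] = (x0[IX(i, j)] +
          a * (x[IX(i - 1, j)] + x[IX(i + 1, j)] +
               x[IX(i, j - 1)] + x[IX(i, j + 1)])) / (1 + 4 * a);
      }
    }
  }
}
```

The advection and projection steps sweep over the grid in the same way, which is why the cost grows quickly with grid size in JavaScript.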

Saturday, October 26, 2013

Normal Mapping



I made an HTML5 shader using normal mapping: http://www.byronknoll.com/dragon.html.

I based this on a demo by Jonas Wagner. I actually found some mistakes in the underlying math of his demo, which I fixed in my version.
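
The basic idea, sketched below in JavaScript (an illustration only, not Jonas Wagner's code or mine): look up a surface normal per pixel from the normal map, then scale the texture colour by a Lambertian term. The image layout and the light direction are assumptions.

```javascript
// Sketch of per-pixel diffuse lighting using a normal map.
// "texture" and "normals" are ImageData objects of the same size;
// the normal map encodes (x, y, z) components in the RGB channels.
function shade(texture, normals, lightDir) {
  const out = new ImageData(texture.width, texture.height);
  // Normalize the light direction.
  const len = Math.hypot(lightDir[0], lightDir[1], lightDir[2]);
  const lx = lightDir[0] / len, ly = lightDir[1] / len, lz = lightDir[2] / len;
  for (let i = 0; i < texture.data.length; i += 4) {
    // Decode the normal from [0, 255] to [-1, 1].
    const nx = normals.data[i] / 127.5 - 1;
    const ny = normals.data[i + 1] / 127.5 - 1;
    const nz = normals.data[i + 2] / 127.5 - 1;
    // Lambertian term: clamped dot product of normal and light direction.
    const lambert = Math.max(0, nx * lx + ny * ly + nz * lz);
    out.data[i] = texture.data[i] * lambert;
    out.data[i + 1] = texture.data[i + 1] * lambert;
    out.data[i + 2] = texture.data[i + 2] * lambert;
    out.data[i + 3] = 255;
  }
  return out;
}
```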

I got the dragon model here. I rendered the dragon using Blender. Here is the texture and normal map:



I now have a gallery of the HTML5 demos I have made: http://www.byronknoll.com/html5.html

Thursday, October 17, 2013

Redesign

I just updated my homepage with a new CSS layout. Let me know if you see any issues or have suggested improvements.

Wednesday, October 09, 2013

Deformable Textures in HTML5 Canvas

Inspired by this HTML5 demo, I decided to try to add a texture to the HTML5 blob I made earlier. Here is my attempt: http://www.byronknoll.com/earth.html



Initially I was stuck because it is too slow to manipulate and render individual pixels in HTML5 canvas. To get a decent framerate you need to either use vector graphics or render chunks of a raster image using drawImage(). My breakthrough came when I read this Stack Overflow thread: using the transform() method, you can perform a linear transformation on regions of a raster image. I split the image of Earth into pizza slices and mapped each slice onto the blob with the appropriate transformation (to match up with the boundary vertices of the blob). It seems to work well and even gets a decent framerate on my phone.
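
Here is a rough sketch of the trick (not my demo's actual code): clip to a destination triangle on the canvas, then use transform() to map the matching triangle of the source image onto it. The helper name and coordinate conventions are my own.

```javascript
// Map a triangle of a source image onto a destination triangle on the canvas.
// src and dst are arrays of three [x, y] points; this solves for the affine
// transform that sends src onto dst, clips to dst, and draws the image.
function drawTriangle(ctx, img, src, dst) {
  const [[x0, y0], [x1, y1], [x2, y2]] = src;
  const [[u0, v0], [u1, v1], [u2, v2]] = dst;
  ctx.save();
  // Clip to the destination triangle so neighbouring slices don't overlap.
  ctx.beginPath();
  ctx.moveTo(u0, v0);
  ctx.lineTo(u1, v1);
  ctx.lineTo(u2, v2);
  ctx.closePath();
  ctx.clip();
  // Solve for the affine transform [a c e; b d f] mapping src -> dst.
  const det = x0 * (y1 - y2) + x1 * (y2 - y0) + x2 * (y0 - y1);
  const a = (u0 * (y1 - y2) + u1 * (y2 - y0) + u2 * (y0 - y1)) / det;
  const b = (v0 * (y1 - y2) + v1 * (y2 - y0) + v2 * (y0 - y1)) / det;
  const c = (u0 * (x2 - x1) + u1 * (x0 - x2) + u2 * (x1 - x0)) / det;
  const d = (v0 * (x2 - x1) + v1 * (x0 - x2) + v2 * (x1 - x0)) / det;
  const e = u0 - a * x0 - c * y0;
  const f = v0 - b * x0 - d * y0;
  ctx.transform(a, b, c, d, e, f);
  ctx.drawImage(img, 0, 0);
  ctx.restore();
}
```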

Thursday, September 12, 2013

Oculus Rift Review



I have been playing with the Oculus Rift developer kit (thanks to my roommate who ordered it). The developer kit costs $300.

It was very easy to set up - all you have to do is plug in the cables and it works. The first problem I encountered was with the size of the headband. The *maximum* length setting is barely big enough for my head. It will probably be uncomfortable for anyone with a bigger head than me.

I started out by trying some demos from https://share.oculusvr.com/. The initial experience is fantastic - more immersive than any gaming experience I have tried before. The headtracking and eye focus feel very natural. It also has a great field of view. The display quality is a bit disappointing, however. Due to the low resolution and magnification from the lens, you can see individual pixels and the black borders between pixels. The display also becomes quite blurry towards the edges. Text is basically unreadable unless it is large and near the center of the display. The consumer edition of the Oculus Rift will have a higher resolution display, so hopefully that improves things. The color quality and brightness of the display seem nice, though.

The headtracking is not perfect. If you concentrate you can see the lag between your head movement and the display being updated. However, it is good enough that you don't notice it unless you are specifically looking for it.

The 3D effect works very well. Since each eye gets its own display, parallax works perfectly and your brain correctly interprets depth information. Much better than the 3D effect at a theater.

Wearing the headset is comfortable, although I started feeling a bit dizzy/nauseated after using it for ~30 minutes.

Games have to specifically add support for the Oculus Rift in order to be compatible. There are currently very few games with support (although that will probably change once the commercial product gets released). I bought Surgeon Simulator 2013. This seems to have pretty good support, although it has some text which is hard to read.



VR goggles and head-mounted displays (HMDs) are definitely going to start becoming popular within the next couple of years. Sony is rumored to be developing VR goggles for the PS4, and has already released an HMD (the HMZ-T2) intended for watching movies and 3D content. The Cinemizer OLED does the same thing. Google Glass will be released soon (along with several direct competitors). I am more excited about these types of displays than I am about the Oculus Rift. I want a high quality HMD that can completely replace my monitor. The HMZ-T2 and Cinemizer are not good enough for this yet.

I did some research into building my own HMD. Building a clone of the Oculus Rift is apparently feasible. I don't care about having head-tracking or a high field of view. Instead I would prefer having higher display quality and less distortion around the edges (by having a lower magnification lens). This would make the display more usable for reading text and watching movies (which the Oculus Rift is terrible at). I tried removing the lenses from the Oculus Rift and looking directly at the display. Unfortunately the image is too close to focus - apparently the lens reduces your minimum focus distance. I think swapping in some lower magnification lenses could improve things.


Saturday, August 31, 2013

Compass Belt



Over the last week I have been assembling a compass belt. It is a belt lined with ten motors controlled by an Arduino. The motor closest to north vibrates, so the person wearing the belt always knows which direction is north. I am not the first person to build a compass belt. Here is a list of all the parts I used. Total cost: $174.13 (although a subset of this cost is for tools - just the belt components would be $143.89). Thanks to dllu for helping me design the circuit. It turned out to be a lot more time consuming to build than I was anticipating. I blame it mostly on the terrible soldering iron (which doesn't get hot enough). Here I am wearing the belt:

Here is a picture of the inside:

The belt seems to work well - it is quiet, accurate, and updates instantly when I turn around. I haven't actually found it useful yet - it seems pointless to wear it in places that I am already familiar with (and it looks silly). Even if I don't end up using it, it was fun to build and I learned a lot about electronics.


Wednesday, May 22, 2013

Unlabeled Object Recognition in Google+

Google+ released an amazing feature that uses object recognition to tag photos. Here is a Reddit thread discussing it. Generalized object recognition is an incredibly difficult problem - this is the first product that I have seen which supports it. This isn't a gimmick - it can recognize objects in pictures without *any* corresponding text information (such as in the filename, title, comments, etc.). Here are some examples from my photos (none of these photos contain any corresponding text data to help):


At first I thought this last one was a misclassification, until I zoomed in further and saw a tiny plane:


Of course, there are also many misclassifications since this is such a hard problem:

This squirrel came up for [cat]:

This train came up for [car]:

This goat came up for [dog]:

These fireworks came up for [flower]:

This millipede came up for [snake]:

Sunday, May 19, 2013

PAQclass

I have released an open source classification algorithm called PAQclass: https://code.google.com/p/paqclass/. I originally created PAQclass when I was working on my master's thesis but I never got around to releasing it until now. PAQclass does classification via data compression. It uses one of the best compression programs available: PAQ8. Although it is very slow, I think PAQclass is probably close to state of the art for text categorization tasks.
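
To illustrate the general idea of classification via compression (this is only a sketch of the concept, not how PAQclass itself is implemented, and it uses gzip as a stand-in for PAQ8): a document is assigned to the class whose training data lets it compress most cheaply.

```javascript
// Sketch of compression-based classification, using gzip instead of PAQ8.
// The best class is the one whose training text makes the test document
// cheapest to compress: size(train + doc) - size(train).
const zlib = require('zlib');

function compressedSize(text) {
  return zlib.gzipSync(Buffer.from(text)).length;
}

// trainingByClass maps a class label to a string of training examples.
function classify(doc, trainingByClass) {
  let best = null;
  let bestCost = Infinity;
  for (const [label, train] of Object.entries(trainingByClass)) {
    const cost = compressedSize(train + doc) - compressedSize(train);
    if (cost < bestCost) {
      bestCost = cost;
      best = label;
    }
  }
  return best;
}
```

A stronger compressor models the text better, which is why swapping gzip for PAQ8 helps accuracy (at the cost of speed).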

Friday, May 17, 2013

Fathauer Fractal Tessellation



I have implemented a fractal in HTML5 canvas: http://www.byronknoll.com/kites.html. It is based on a fractal discovered by Robert Fathauer. Here is a zoomed-in view of the top of my fractal:



The fractal seems to have some interesting properties. Near the center of the zoomed-in image above you can see a "tunnel" of consecutively smaller triangles. There are an infinite number of these tunnels (and I think they exist in every direction). Along the vertical you can see there is a pattern to the location of the tunnels. The distance between the tunnels is a geometric sequence with a common ratio of 1/3. As you travel into a tunnel, the hypotenuse length of consecutive triangles is also a geometric sequence with the same common ratio of 1/3.

Friday, May 10, 2013

O(n log n)

I have released a new version of my visibility polygon library. I improved the time complexity from O(n^2) to O(n log n) (where n is the number of vertices). O(n log n) is actually the *optimal* time complexity for this problem (since sorting can be reduced to it, and comparison sorting has an Ω(n log n) lower bound).

A high level description of the algorithm: First, sort all vertices according to their angle to the observer. Now iterate (angle sweep) through the sorted vertices. For each vertex imagine a ray projecting outwards from the observer towards that vertex - all we need to compute is the closest "active" line segment in order to construct the visibility polygon.

The closest active line segment must be computed in O(log n) time (for each vertex). I used a special type of heap to accomplish this. The heap keeps track of all active line segments (arranged by distance to the observer). The closest line segment is at the root of the heap. Since line segments don't intersect (a constraint in the problem definition), the heap remains consistent. This property is essential - if the distance ordering between two line segments could change depending on the angle, then a heap would no longer work. So, we can find the closest segment in O(1) time and insert new segments in O(log n) time.

The reason the heap I used is "special" is that it also allows removing line segments (when they are no longer active) in O(log n) time. Inactive line segments can't just be ignored and left in the heap - they need to be removed to maintain a consistent distance ordering. In a standard heap, removing an arbitrary element takes O(n) time (since it takes linear time just to find the element). My heap contains an additional map structure from element value to heap index, so elements can be found in O(1) time. Once an element is found, we swap in the last element in the tree and propagate it either up or down (which takes O(log n) time) to maintain heap correctness. Hooray!
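
Here is a minimal sketch (in JavaScript, not the actual library code) of a binary min-heap with an extra map from element id to heap index, which is what makes O(log n) removal of arbitrary elements possible. The "dist" priority stands in for a segment's distance to the observer.

```javascript
// Min-heap with an id -> index map so arbitrary elements can be removed
// in O(log n). Not the library's actual implementation, just the idea.
class IndexedMinHeap {
  constructor() {
    this.items = [];          // array of {id, dist}
    this.indexOf = new Map(); // id -> current index in this.items
  }
  push(id, dist) {
    this.items.push({id, dist});
    this.indexOf.set(id, this.items.length - 1);
    this.bubbleUp(this.items.length - 1);
  }
  peek() {                    // closest element in O(1)
    return this.items[0];
  }
  remove(id) {                // remove an arbitrary element in O(log n)
    const i = this.indexOf.get(id);
    if (i === undefined) return;
    const last = this.items.pop();
    this.indexOf.delete(id);
    if (i < this.items.length) {
      // Swap in the last element and restore the heap property.
      this.items[i] = last;
      this.indexOf.set(last.id, i);
      this.bubbleUp(i);
      this.sinkDown(i);
    }
  }
  swap(i, j) {
    [this.items[i], this.items[j]] = [this.items[j], this.items[i]];
    this.indexOf.set(this.items[i].id, i);
    this.indexOf.set(this.items[j].id, j);
  }
  bubbleUp(i) {
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (this.items[i].dist >= this.items[parent].dist) break;
      this.swap(i, parent);
      i = parent;
    }
  }
  sinkDown(i) {
    const n = this.items.length;
    while (true) {
      let smallest = i;
      const l = 2 * i + 1, r = 2 * i + 2;
      if (l < n && this.items[l].dist < this.items[smallest].dist) smallest = l;
      if (r < n && this.items[r].dist < this.items[smallest].dist) smallest = r;
      if (smallest === i) break;
      this.swap(i, smallest);
      i = smallest;
    }
  }
}
```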

Wednesday, May 01, 2013

Visibility Polygons



I have released an open source JavaScript library for computing visibility polygons: https://code.google.com/p/visibility-polygon-js/

Demo: http://www.byronknoll.com/visibility.html

I don't think there are any other JavaScript libraries which compute visibility polygons. A few years ago I released a Java game that used visibility polygons.

Thursday, April 25, 2013

Google Bikes


An article about Google Bikes is currently on the homepage of wired.com. If you watch the video about conference bikes, I am one of the riders (facing backwards).

Saturday, April 20, 2013

Canvas Demo

http://www.byronknoll.com/geb.html

I didn't use WebGL for this demo - just canvas polygons. This is actually the first time I have made a 3D rendering engine. For the simple model in this demo, the framerate appears to be high (even on my phone).
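
The core of a canvas-only renderer boils down to projecting 3D vertices to 2D and filling the resulting polygons. Here is a rough sketch of that idea (not the demo's actual code; the pinhole projection and focal length are assumptions).

```javascript
// Project a 3D point to canvas coordinates with a simple pinhole camera
// at the origin looking down the z axis. Not the demo's actual code.
function project(point, focalLength, width, height) {
  const scale = focalLength / (focalLength + point.z);
  return {
    x: width / 2 + point.x * scale,
    y: height / 2 - point.y * scale,
  };
}

// Draw one face of a model as a filled 2D polygon.
function drawFace(ctx, vertices, focalLength, fillStyle) {
  const pts = vertices.map(v =>
    project(v, focalLength, ctx.canvas.width, ctx.canvas.height));
  ctx.beginPath();
  ctx.moveTo(pts[0].x, pts[0].y);
  for (let i = 1; i < pts.length; i++) ctx.lineTo(pts[i].x, pts[i].y);
  ctx.closePath();
  ctx.fillStyle = fillStyle;
  ctx.fill();
}
```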

Sunday, April 14, 2013

3D Prints

Here are some more 3D prints I ordered from Shapeways:







So, the second version of my monostatic body actually works! You can order the design from Shapeways.

Saturday, March 30, 2013

Monostatic Body

I received the 3D print of the one-sided die that I designed. Unfortunately it sometimes gets stuck upside-down on the unstable equilibrium. I have designed a new version which I think should fix the problem: http://www.shapeways.com/model/979729/monostatic-body.html. This version has a ridge on top which should make the unstable equilibrium... less stable:


The ridge causes the center of mass to rise a bit (to 0.9 mm below the cylinder's center). I feel confident that this version of the shape should work - it will take about two weeks until I get the printed copy from Shapeways.