Monday, September 04, 2006

Z800 Joystick Emulator Released

After what seems like an eternity waiting for Emagin to release a joystick driver or emulator for the Z800, I have decided to release my own joystick emulator.

This builds on the great work done by Deon van der Westhuysen on his PPJoy virtual joystick driver. PPJoy allows you to set up a virtual joystick and then write code to pass information to it. I have written an interface between the Z800 and the virtual joystick which passes the 6DOF tracker information from the Z800 to the virtual joystick as three separate axes. I have tested this in the Flight Simulator X demo and it seems to work great. There is a little bit of drift, but I am pretty sure this is down to the headset rather than my code. This should work in any game that supports a joystick for 'look around' type control.
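For anyone curious what the tracker-to-axis mapping might look like, here is a minimal sketch in C++. It only shows the normalization step, mapping one tracker angle onto a 16-bit axis value; the function name, the -180..180 input range and the 0..65535 output range are my assumptions for illustration, not PPJoy's actual interface.

```cpp
#include <algorithm>
#include <cstdint>

// Map a tracker angle in degrees (assumed range -180..180) onto a
// 16-bit virtual joystick axis (assumed range 0..65535), with 0
// degrees at the axis center. PPJoy's real reporting range may differ.
std::uint16_t angleToAxis(double degrees) {
    double clamped = std::max(-180.0, std::min(180.0, degrees));
    double normalized = (clamped + 180.0) / 360.0;  // 0..1
    return static_cast<std::uint16_t>(normalized * 65535.0 + 0.5);
}
```

The same mapping would be applied independently to yaw, pitch and roll before handing the three values to the virtual joystick.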

Anyway, enough talk. You can download the program from my website by clicking here, and there are some brief instructions on setup. If you find the program useful, or you have some questions or ideas, then please drop me a line. This is a first release; I will be doing an update soon to add options for a reset hotkey and the sample rate, so stay tuned.

Monday, August 28, 2006

Scene Graph and Resource Cache UML Class Design

I'm getting back into my game programming again, and I have added the code to attach one mesh to another (a sword to a character's hand, for example). It seems to work great. It was a bit of a grind going over my classes again, so I used Visio to create a UML class diagram showing the basic structure of the scene graph and resource classes. Although I am not touting this as the best solution, it works well and may be of help to amateur game programmers.

Below is a screenshot from the latest engine. It shows how a mesh such as a sword can be attached to the character's hand using the AttachNode object. The AttachNode object takes the AnimatedMesh reference and the name of the bone to attach to.
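In sketch form, the attachment idea looks something like the following. The AttachNode and AnimatedMesh names are from my design; everything else (the matrix type, member names, the per-frame bone matrix map) is simplified for illustration and not the engine's actual code.

```cpp
#include <map>
#include <string>
#include <utility>

// Placeholder 4x4 transform; the real engine uses its math library's type.
struct Matrix { float m[16]; };

// Stand-in for the engine's animated mesh: exposes the current world
// matrix of each bone, updated every frame by the animation system.
struct AnimatedMesh {
    std::map<std::string, Matrix> boneWorldMatrices;
};

// An AttachNode holds a reference to an AnimatedMesh and a bone name,
// and resolves that bone's world matrix so whatever mesh is parented
// to this node follows the bone as it animates.
class AttachNode {
public:
    AttachNode(AnimatedMesh* mesh, std::string boneName)
        : mesh_(mesh), boneName_(std::move(boneName)) {}

    // Used as the parent transform for the attached mesh each frame.
    const Matrix& boneWorldMatrix() const {
        return mesh_->boneWorldMatrices.at(boneName_);
    }

private:
    AnimatedMesh* mesh_;
    std::string boneName_;
};
```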

Saturday, February 11, 2006

Resource Cache, Mesh and Animation Updates Complete

I just finished a major overhaul of the mesh and animation classes for the engine, and also added a resource cache for more efficient memory management. Before I go into the very lengthy discussion of the changes and what’s next on the horizon, here is a screenshot from the latest build.

The Geometry class has now gone in favor of a Mesh class which allows for both cached and dynamically created meshes. Vertex and index buffers are created directly in graphics-API-managed memory, which means they no longer have to be copied from another memory area, cutting out the previous duplicate memory usage. Textures are likewise now loaded directly into graphics API memory, eliminating the duplicate copy that used to be held in a concurrent Image object.

The ResourceCache class is the manager class for any class derived from the Resource class. The resource cache is used primarily to keep the most frequently used resources in memory and also to allow shared usage of this resource data. The resource cache only manages resource objects that come directly from files. It has to be this way as the resource cache has to be able to dynamically load and unload resources and this can only happen if the resource exists on disk.

Resources represent data that can be shared and does not include any instance-specific information. For example, the vertex and index data of a character mesh is resource data because it doesn’t change between two instances of the same type of character. Information that does change, such as local and world matrices, instance names etc., is not part of the resource information. In the example of a character mesh, the MeshResource would contain the resource data, and the Mesh class would contain the matrices, instance name etc. The Mesh class references the MeshResource object via a CacheDesc object, which basically describes where the resource comes from (usually a filename). When the mesh needs to access the MeshResource information it makes a call to the resource cache using the CacheDesc object. The resource cache then checks whether the object is in memory; if it is, it simply passes a pointer back. If it is not in memory, it loads the resource using the CacheDesc and passes back a pointer.
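The lookup flow described above can be sketched as follows. The CacheDesc is reduced here to a filename and Resource to a trivial base class; the real classes carry much more state. This only shows the "return if resident, otherwise load" behavior, with names beyond ResourceCache/Resource/CacheDesc being illustrative.

```cpp
#include <map>
#include <memory>
#include <string>

// Minimal stand-in for a file-backed resource.
struct Resource {
    explicit Resource(std::string file) : sourceFile(std::move(file)) {}
    std::string sourceFile;
};

// Describes where a resource comes from (usually a filename).
struct CacheDesc {
    std::string filename;
};

class ResourceCache {
public:
    // Look the resource up by its descriptor; load it on a cache miss.
    std::shared_ptr<Resource> get(const CacheDesc& desc) {
        auto it = resident_.find(desc.filename);
        if (it != resident_.end())
            return it->second;              // already in memory
        auto loaded = loadFromDisk(desc);   // miss: load from the CacheDesc
        resident_[desc.filename] = loaded;
        return loaded;
    }

private:
    // Stand-in for real file I/O.
    std::shared_ptr<Resource> loadFromDisk(const CacheDesc& desc) {
        return std::make_shared<Resource>(desc.filename);
    }

    std::map<std::string, std::shared_ptr<Resource>> resident_;
};
```

Because every Mesh instance goes through `get`, two characters built from the same file end up sharing one resident MeshResource, which is the whole point of the cache.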

Classes currently derived from the Resource class include TextureResource, HeightmapResource, AnimationResource and MeshResource. The Mesh and Texture classes also allow creation and saving of dynamically created meshes and textures. In the case of the Mesh object this is done by holding a direct pointer to a MeshResource object that is not managed by the resource cache (the resource cache only manages file-based resources). On creation of a mesh object you specify whether the mesh is coming from a file, in which case the CacheDesc property will be used, or whether it is dynamic, in which case the pointer to the MeshResource object will be used.

As you may have gathered from the above, models and animations are now exported as separate files. This means you can share animations between models, which helps cut down on memory usage and file sizes, and also makes it possible to plug new animations into the engine without having to re-export all the models.

Animations are exported as ‘animation sets’. Each animation set can contain multiple animations, denoted by the name, start frame and end frame of the animation. One or more animation sets are then bound to an AnimatedMesh object at run-time, which copies references to the animations into the available animations for that mesh. Issuing a ‘Play’ call to the AnimatedMesh object then sets a reference to the AnimationSet object, which is used to update the bone matrices each frame. At the moment the vertices are updated from the bone matrices into a dynamic underlying Mesh object that belongs to the AnimatedMesh class, which is then rendered when the AnimatedMesh object is rendered. In the future this will be replaced with a vertex shader: the matrix values will be passed in via shader registers and the bone/weight table in with the vertex data, allowing the model to be rendered from a static vertex set that can stay in video card memory without being altered.
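The per-frame CPU skinning step mentioned above boils down to blending each vertex by its weighted bone matrices and writing the result into the dynamic mesh's vertex buffer. Here is a simplified sketch: the structures (one position per vertex, up to four bone influences, 3x4 affine matrices) are my assumptions for illustration, not the engine's actual layout.

```cpp
#include <array>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// 3x4 affine bone transform, row-major.
struct BoneMatrix { float m[3][4]; };

struct SkinnedVertex {
    Vec3 position;                 // bind-pose position
    std::array<int, 4> bones;      // bone indices (-1 = unused slot)
    std::array<float, 4> weights;  // blend weights, summing to 1
};

// Apply an affine bone transform to a position.
Vec3 transform(const BoneMatrix& b, const Vec3& v) {
    return { b.m[0][0]*v.x + b.m[0][1]*v.y + b.m[0][2]*v.z + b.m[0][3],
             b.m[1][0]*v.x + b.m[1][1]*v.y + b.m[1][2]*v.z + b.m[1][3],
             b.m[2][0]*v.x + b.m[2][1]*v.y + b.m[2][2]*v.z + b.m[2][3] };
}

// Blend each vertex by its bone matrices into the output buffer,
// which in the engine would be the dynamic Mesh's vertex data.
void skin(const std::vector<SkinnedVertex>& in,
          const std::vector<BoneMatrix>& bones,
          std::vector<Vec3>& out) {
    out.resize(in.size());
    for (std::size_t i = 0; i < in.size(); ++i) {
        Vec3 acc{0, 0, 0};
        for (int j = 0; j < 4; ++j) {
            if (in[i].bones[j] < 0) continue;
            Vec3 t = transform(bones[in[i].bones[j]], in[i].position);
            acc.x += t.x * in[i].weights[j];
            acc.y += t.y * in[i].weights[j];
            acc.z += t.z * in[i].weights[j];
        }
        out[i] = acc;
    }
}
```

Moving this loop into a vertex shader means the per-vertex blend happens on the GPU instead, with the bone matrices uploaded to shader constants each frame, which is exactly why the static vertex set can then stay untouched in video memory.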

Ok, that was a little more technical than I was planning to get for this blog update, but maybe someone will find it useful. The long and the short of it is that, from a rendering and animation standpoint, the design is now looking pretty solid. Some of the smaller, more visible changes include fog for distance rendering and a skybox.

The next big changes in store for the core engine are the implementation of AABB collision against static and animated mesh objects, a dynamic quadtree class, attachment tags for meshes, more user interface code, indoor environment rendering and a particle system. Once these are done, the real test will be using all the components to build an actual (albeit single-player) game.

Outside of, and before, the engine changes though, I must add a static object manifest and static object placement functionality to the terrain editor. With the already-written 3D Studio export scripts, this will allow world designers to start putting together environments for testing. On this subject I also had the thought of calculating a shadow map for the terrain based on the light positions and object placement. I am not sure how complex this will be, but I think it would add considerably to the overall aesthetics. Priority-wise it is probably pretty low, and it will likely remain a wish-list item for the time being.

Thursday, October 13, 2005

Hierarchical Animation Working In Engine

Great news. I have successfully got my hierarchical animation working in my game engine now. It took a while to figure out the nuances of 3D Studio Max's left-handed co-ordinate system for my export script, but it's finally working. You can download a video of it in action here. You will need a software DVD video player in order to play the clip. The minotaur figure is a 3D Studio sample from the website of the game Neverwinter Nights. I exported it using my Max export script into my engine's proprietary format.

The next thing I am going to work on is my GUI library and event model. This will allow me to have menus, dialogs, buttons etc. in the game. I will then convert my terrain editor to use this GUI so I can finally get rid of MFC.

Thursday, October 06, 2005

Blog Feed Active

You can now subscribe to a feed of this blog, powered by Feedburner. The Atom feed is available at

Why not put it on your personalized Google home page! ;)

Wednesday, August 10, 2005

Using 5.1 Audio and a Microphone as a Tracking Device

Ok, this is just a thought and may be completely unfeasible but I was trying to think the other night how to create a tracking device using regular objects that are low cost and in common use.

I was reading up on how GPS works: satellites synchronized with atomic clocks send radio signals, and the receiver calculates the distance from each satellite by working out how long the signal took to arrive, allowing the receiver's position to be triangulated. This is an over-simplified view I'm sure, but bear with me.

So, my next thought was... why can't something like this be done for a local setup? Radio waves would obviously travel far too quickly, so something slower would be needed; my first thought was sound.

What if a 5.1 system sent regular high-frequency blips from each of its speakers (each speaker at a slightly different frequency), and you used a microphone as a tracker, with software to calculate the relative distance from each of the speakers based on the order and timing of the frequencies in the sampled signal?

I am not a physics or math expert, but I have tried some layman's calculations here. The Soundblaster Audigy can sample at a rate of 96 kHz. Sound travels in air at approximately 345 meters per second, so it takes about 0.0029 seconds to travel one meter. 96 kHz means 96,000 samples per second, which gives a sample every 0.0000104 seconds. That works out to roughly 278 samples per meter, which could give an accuracy of about 0.0036 meters (0.36 cm). Sounds good. Obviously this assumes the software could process the signal in real time. I am also assuming some type of Fourier transform would be needed to split the frequencies back out, and I am not sure how quickly that can be done.

Can anyone who knows more about physics than me possibly comment?

Wednesday, July 27, 2005

HMD Field of View Comparison

There has been some discussion and confusion over field of view sizes for the Z800 in comparison to other head mounted displays. The fact that the individual manufacturers quote in different sizes does not make this any easier. To help clarify, I have calculated the virtual screen sizes at the same distance based on the published field of view specifications for the major models. My results are shown below.

Cy-Visor DH4400

31 degree FOV equivalent to 44" at 2m

i-Glasses 3D

26 degree FOV equivalent to 36.4" at 2m

Sony PLM-S700(E)

38 degree FOV equivalent to 54.2" at 2m

Emagin Z800

40 degree FOV equivalent to 57.3" at 2m
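For anyone who wants to redo these numbers, the conversion treats the quoted FOV as the angle subtended by the screen diagonal, so the equivalent diagonal at distance d is 2·d·tan(FOV/2). A small sketch (the function name is mine):

```cpp
#include <cmath>

// Equivalent diagonal screen size, in inches, for a display that
// subtends `fovDegrees` at a viewing distance of `meters`.
double virtualDiagonalInches(double fovDegrees, double meters) {
    const double pi = 3.14159265358979323846;
    double halfAngle = (fovDegrees * pi / 180.0) / 2.0;
    double diagonalMeters = 2.0 * meters * std::tan(halfAngle);
    return diagonalMeters / 0.0254;  // meters to inches
}
```

Plugging in the published FOVs reproduces the figures above (the 31 degree Cy-Visor comes out at 43.7", which I rounded to 44").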

Hope this helps!