These are some of the projects I've recently worked on (ca. the past 2 years). All of them are written in C or C-like C++. A Japanese version is available here.
To start something, an internal barrier needs to be crossed. The fewer steps it takes to start, the lower that barrier is. For small experiences that you want people to try out, understanding this is important.

This is a 3D game engine (WIP) built to enable the realization of smaller 3D games, video toys and interactive art. The core concepts are ease of distribution and, ultimately, ease of development.
I wanted the games made with this engine to be packed into one exe that you just download and double-click. Managing distribution becomes easier too: all you need to do is send people the exe and they can try it out immediately. To achieve this, the assets need to be packed into the executable itself. While there are many ways to do that, after thinking about the problem space I settled on the following method: after creating the assets and exporting them to the glTF model format, a program converts the glTF model data into a custom C-struct-based format. The structs are then printed out as C declarations, which the compiler parses to create the executable. Binary data such as images is included in the executable as binary data and does not need to be parsed at compile time, so compile times stay fast. Runtime loading of data is no longer necessary; the model data can be used directly in code, and buffer data just needs to be (potentially) decompressed and uploaded to the GPU.

Since the data is part of the program and not external, the references to that data, which would usually have to be something like strings, are also just C variables that are usable in code. No hashtable lookups or the like are necessary; just use the variable names. Your favorite IDE's renaming functionality will also just work and correctly rename any references to these variable names. Furthermore, if something is wrong with the data, if data is missing, or if you mistyped a name, you will know at compile time, and your compiler will tell you exactly what was wrong. Data becomes part of the code and gains all the utility that comes with that.

This is all critical for the second point, ease of development. Game development is already hard enough, so it is important to make the process as seamless as possible. Fighting your tools is not a good use of one's time; days and weeks can be spent on such useless battles. It's better to use our tools as weapons and fight with them! I digress. The more streamlined the process is, the better, and asset management is of utmost importance. For small games a complicated system is a hindrance. It's more useful if everything is readily available in the code and can be manipulated with code, allowing creativity while staying streamlined with everything else the creator does. Code is powerful, regular, familiar, and we already have tools for it. Nothing custom was necessary to begin with.
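To make the asset-pipeline idea more concrete, here is a rough sketch of what a generated declaration could look like. The struct layout, names and numbers are all hypothetical, not the engine's actual format:

/* Hypothetical sketch of generated asset data; the real engine's structs differ. */
typedef struct {
    const char          *name;
    unsigned int         index_count;
    unsigned int         vertex_count;
    const unsigned char *vertex_blob;       /* compressed vertex data, linked in as binary */
    unsigned int         vertex_blob_size;
} Mesh;

/* player.gltf -> player_mesh.c, emitted by the asset prepass */
extern const unsigned char player_vertex_blob[];   /* binary blob, linked in separately */

const Mesh player_mesh = {
    .name             = "player",
    .index_count      = 3012,
    .vertex_count     = 1520,
    .vertex_blob      = player_vertex_blob,
    .vertex_blob_size = 96640,
};

Game code then refers to player_mesh directly; a mistyped name is a compile error instead of a failed runtime lookup.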
Now on to the other parts of this demo:

The game-physics code is written by me. The ingredients are an AABB BVH, raycasts, a capsule-triangle penetration test, non-rotational contact resolution and some helper math functions. Around 800 lines of code are enough to make a character run around and not walk into things! Ultimately I want to add a more complete physics library, so I'm working on physics code on the side (see below). For many games this much is already sufficient, and anyone who wants to can use one of the free physics libraries around, so my focus is on other things for now. That being said, having straightforward physics code, instead of the simulation-world approach found in physics libraries, allows character controls and other gameplay features to be implemented more precisely.
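As a rough illustration of the kind of building blocks involved (not the engine's actual code), an AABB overlap test and an id-based BVH node could look like this:

/* Sketch only: an axis-aligned bounding box, an overlap test, and a BVH node
   that references its children by index instead of by pointer. */
typedef struct { float min[3], max[3]; } AABB;

static int aabb_overlap(AABB a, AABB b)
{
    return a.min[0] <= b.max[0] && b.min[0] <= a.max[0] &&
           a.min[1] <= b.max[1] && b.min[1] <= a.max[1] &&
           a.min[2] <= b.max[2] && b.min[2] <= a.max[2];
}

typedef struct {
    AABB bounds;
    int  left, right;   /* child node indices, -1 for leaves */
    int  triangle;      /* triangle index for leaves, -1 otherwise */
} BVHNode;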
Straightforward GPU-skinned skeletal animation with interpolation between different animations. The focus is on simplicity first and foremost.
Skinning is done in a compute shader, with the results written into a buffer, allowing the skinned vertices to be reused. This has the additional benefit that the same vertex shader can be used for static and dynamic geometry.
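Sketched on the CPU for clarity, the per-vertex work of linear blend skinning looks roughly like this; in the engine the same math runs in the compute shader and the result is written into a buffer (names here are illustrative):

/* Blend up to four joint matrices by weight and transform the rest-pose
   position with the result (linear blend skinning). Column-major 4x4 matrices. */
typedef struct { float m[16]; } Mat4;

void skin_vertex(const Mat4 *joint_matrices,
                 const unsigned char joints[4], const float weights[4],
                 const float rest_pos[3], float out_pos[3])
{
    float skin[16] = {0};
    for (int i = 0; i < 4; i++)
        for (int k = 0; k < 16; k++)
            skin[k] += weights[i] * joint_matrices[joints[i]].m[k];

    for (int r = 0; r < 3; r++)
        out_pos[r] = skin[0 + r] * rest_pos[0] +
                     skin[4 + r] * rest_pos[1] +
                     skin[8 + r] * rest_pos[2] +
                     skin[12 + r];            /* translation column, w = 1 */
}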
IK functionality (using the FABRIK algorithm) is included too.
In the demo, it is used to adjust the character's legs based on the terrain. Simple, but it adds a lot of value.
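For reference, one FABRIK iteration over a joint chain can be sketched like this (illustrative code, not the engine's implementation):

/* One FABRIK iteration: joints are 3D points, bone lengths are fixed, the
   root stays anchored and the end effector reaches for 'target'. */
#include <math.h>

static void move_towards(float p[3], const float from[3], float dist)
{
    /* put p on the line from 'from' towards its current position, at 'dist' */
    float d[3] = { p[0] - from[0], p[1] - from[1], p[2] - from[2] };
    float len = sqrtf(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
    if (len > 0.0f) {
        float s = dist / len;
        p[0] = from[0] + d[0] * s;
        p[1] = from[1] + d[1] * s;
        p[2] = from[2] + d[2] * s;
    }
}

void fabrik_iteration(float (*joints)[3], const float *lengths, int count,
                      const float target[3])
{
    float root[3] = { joints[0][0], joints[0][1], joints[0][2] };

    /* backward pass: snap the end effector to the target, then pull each
       joint towards its child so that the bone lengths are preserved */
    joints[count - 1][0] = target[0];
    joints[count - 1][1] = target[1];
    joints[count - 1][2] = target[2];
    for (int i = count - 2; i >= 0; i--)
        move_towards(joints[i], joints[i + 1], lengths[i]);

    /* forward pass: re-anchor the root and pull each joint towards its parent */
    joints[0][0] = root[0]; joints[0][1] = root[1]; joints[0][2] = root[2];
    for (int i = 1; i < count; i++)
        move_towards(joints[i], joints[i - 1], lengths[i - 1]);
}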
I chose to use stencil shadow volumes, mostly because I found the idea cool. The patent for Carmack's Reverse has also expired, so it should be fine to use. The plus side of stencil shadow volumes is that, as long as you have enough budget to calculate the shadow volumes and enough rasterization headroom, there is not much the developer needs to think about: the quality of the shadows is the same over the whole image and little tweaking is necessary. Again, I thought this is a good fit for simple games. The downside is that the shadows are always sharp, which can sometimes look like an artifact, especially around shadow-casting edges. I have some ideas for improving this that are still in development; basically, if it is possible to write distance-from-casting-edge information into a buffer, then such problems can be smoothed out. The computation of the shadow volume is done in a compute shader.
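For illustration, the depth-fail ("Carmack's Reverse") stencil state can be sketched in OpenGL like this. It only shows the technique; the engine's actual graphics API and state handling may differ, and draw_shadow_volumes is a made-up placeholder:

/* Depth-fail stencil pass, sketched in OpenGL (GL 2.0+ context and loader assumed).
   Scene depth is already in the depth buffer; shadow volumes are rasterized
   without color or depth writes. Back faces increment the stencil where the
   depth test fails, front faces decrement; pixels left with a non-zero
   stencil count are in shadow. */
void draw_shadow_volumes(void);              /* placeholder: volumes from the compute pass */

void shadow_volume_stencil_pass(void)
{
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glEnable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);                 /* draw front and back faces in one pass */
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_INCR_WRAP, GL_KEEP);
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);

    draw_shadow_volumes();

    /* the lighting pass afterwards uses glStencilFunc(GL_EQUAL, 0, 0xFF)
       so that only unshadowed pixels receive light from this light source */
}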
- GPU state management, drawing and compute code using command lists
- 3D debug drawing
- JSON and glTF parser (only for the asset-generation prepass)
- etc.
The majority of the code was written by me. If you keep things simple, not much code is necessary. Libraries used are:
- stb_image by nothings

Thanks to both!
Assets used:
- skybox from skiingpenguins' skybox-pack

All other assets, such as textures, 3D models and animations, are made by me. Also a bit about the process of creating these: textures are made from photos I found on the net or took myself and edited with GIMP to tile properly. 3D models and animations are made with Blender. I find Blender hard to use from a workflow perspective, so I am developing a 3D modeling program as another project (see below). One day I wish to escape Blender and use my own modeler, not only for modeling, but also as a level editor for smaller games, integrating the whole workflow into one tool.
Working on models in Blender, I found the workflow to leave a lot to be desired. While it is a powerful tool, it is also quite unwieldy in places, and while it is possible to extend it via add-ons, changing the core of the modeling engine is not a simple task. Thinking about how to improve this situation for myself, I came up with the idea that if I built a modeling program of my own, I could use it for more than just modeling. Integrating the modeler into a game engine opens up many possibilities: using it as the foundation for a level editor, generating models and art via code, using its features for gameplay, etc. So I set out to program this tool.

The core of the tool is similar to the BMesh data structure: a mesh-description data structure that imposes nearly no limits on the mesh topology, and as such makes it relatively easy to implement features as a series of small operations. Compared to BMesh, I use ids instead of pointers. This structure and several modeling functions that act on it are realized in a small C99 library that can be embedded into other programs and used from other programming languages as well. The code for the editor uses this functionality to realize its editing capabilities. An interesting point from a developer's perspective is how the undo functionality is realized: since my implementation of the mesh structure uses int ids as references (instead of pointers), the data structure itself is trivially copyable. As such, to undo an operation, you can just replace the current data by binary-copying the last state. Commits can be stored as binary diffs from the last state, saving memory.
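A very reduced sketch of the idea (not the library's actual layout): every element references other elements by integer id, so the whole mesh lives in flat arrays and a snapshot is a plain memcpy:

#include <string.h>

typedef struct { float co[3]; int edge; } Vert;      /* id of one incident edge */
typedef struct { int v0, v1; } Edge;
typedef struct { int first_loop; } Face;
typedef struct { int vert, edge, face, next; } Loop; /* face corner, ids only */

typedef struct {
    Vert verts[4096];  int vert_count;
    Edge edges[8192];  int edge_count;
    Face faces[4096];  int face_count;
    Loop loops[16384]; int loop_count;
} Mesh;

/* Undo becomes trivial: copy the whole struct (or diff it against the
   previous snapshot to save memory). */
void snapshot(Mesh *dst, const Mesh *src) { memcpy(dst, src, sizeof *dst); }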
Current modeling features:
- Polygon modeling via loop cuts, extrudes and such.
- Surface-distance-based proportional-falloff vertex displacement, also called 'sculpting'. To make this fast, at the start of the operation I convert the bmesh data structure into a more easily iterable vertex-connection data structure. From this data the approximate falloff is calculated every frame, and vertices are displaced based on this falloff (a small sketch of the falloff step follows below).
- Projective texture painting (still pre-alpha, but the basics work). For this, I rasterize each triangle in texture/UV space, and for every rasterized pixel of the texture I check whether it is projected into the drawn line. If so, it receives color based on the selected brush.
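The falloff step mentioned above could be sketched like this (illustrative names, not the tool's actual code): 'dist' is the approximate surface distance from the grabbed vertex, 'radius' the brush radius, and vertices inside the radius move by a smoothly decaying fraction of the brush translation.

/* smoothstep-shaped falloff: 1 at the center, 0 at the brush edge */
static float falloff(float dist, float radius)
{
    if (dist >= radius) return 0.0f;
    float t = 1.0f - dist / radius;
    return t * t * (3.0f - 2.0f * t);
}

void displace(float (*positions)[3], const float *surface_dist, int count,
              const float delta[3], float radius)
{
    for (int i = 0; i < count; i++) {
        float w = falloff(surface_dist[i], radius);
        positions[i][0] += w * delta[0];
        positions[i][1] += w * delta[1];
        positions[i][2] += w * delta[2];
    }
}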
Libraries used are:
- xatlas by jpcy

A C-like programming language that compiles to memory-operand-based bytecode, which is then read and executed by an interpreter/VM. The focus lies more on the VM than on the language itself; the language was created mostly so that I had a way to feed program code to the VM without writing asm by hand.
Not a perfect comparison, but on my machine it compiles and computes fib(35) faster than Python, Lua and LuaJIT (with the JIT disabled). See here for the times.
This is how it looks:
proc fib (int n) (int r) {
    if (n <= 1) {
        r = n;
        return;
    }
    r = fib(n - 1).r + fib(n - 2).r;
}

proc main (int input) (int output) {
    int a;
    a = fib(35).r;
    print a;
}
And this is the VM bytecode for the above program. The stack layout is described by the 'val' lines. These stack values can then be referenced by their names in the instructions themselves, often resulting in fewer instructions overall compared to a pure stack-machine approach. Note that in the actual bytecode these references are just stack-pointer offsets.
asm:
| JUMP_C main,0, 0
fib:
| | val: r; size: 4, align: 4, offset: 0, type: int
| | val: n; size: 4, align: 4, offset: 4, type: int
| | val: tv_0; size: 4, align: 4, offset: 16, type: int
| u4_LE_SSC tv_0, n, 1
| J0_CS then_0,tv_0, 0
| COPY_SS r, n, size(int)
| RETURN_I 0, 0, 0
then_0:
| | val: tv_0; size: 4, align: 4, offset: 32, type: ( int r)
| | val: tv_1; size: 4, align: 4, offset: 36, type: int
| | val: tv_2; size: 4, align: 4, offset: 40, type: int
| u4_SUB_SSC tv_1, n, 1
| CALL_CC fib,32, 0
| | val: tv_3; size: 4, align: 4, offset: 48, type: ( int r)
| | val: tv_4; size: 4, align: 4, offset: 52, type: int
| | val: tv_5; size: 4, align: 4, offset: 56, type: int
| u4_SUB_SSC tv_4, n, 2
| CALL_CC fib,48, 0
| u4_ADD_SSS r, tv_0, tv_3
| RETURN_I 0, 0, 0
| STOP 0, 0, 0
main:
| | val: output; size: 4, align: 4, offset: 0, type: int
| | val: input; size: 4, align: 4, offset: 4, type: int
| | val: a; size: 4, align: 4, offset: 8, type: int
| | val: tv_0; size: 4, align: 4, offset: 32, type: ( int r)
| | val: tv_1; size: 4, align: 4, offset: 36, type: int
| SET_SC tv_1, 35, 0
| CALL_CC fib,32, 0
| COPY_SS a, tv_0, size(int)
| u4_PRINT_S a, 0, 0
| STOP 0, 0, 0
\asm
Above I mentioned 'memory-operand-based bytecode'. Here is a short explanation of that concept. In a stack machine, all operations work on the stack, pushing and popping values. It is an easy-to-understand model, but it results in lots of instructions and thus slow VM code; a way out is optimizing it into a better form by JITting. A register machine, in comparison, works on values held in registers, which are often fetched from and stored back to memory. To reduce the number of registers needed and/or the overall instruction count for a given program, instead of loading a value from memory into a register, doing the arithmetic and storing the result back to memory, we can also just reference the memory address directly in the instruction.

There are many reasons to use registers in a physical machine, but I thought that maybe there aren't many reasons to use them in a virtual machine; after all, in a VM both live in the same memory. So I programmed a VM where all instruction operands are direct values, stack-pointer-relative memory references or stored-pointer-relative memory references. The speed of the VM is acceptable, and it is easy to generate the bytecode for a given program. Furthermore, it is trivial to turn this into register-machine code by running a register allocator on the instructions, so incidentally it might also make an OK intermediate representation for a simple compiler, or allow for easy JITting.
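As a minimal sketch of the concept (the opcodes and struct layout here are made up for illustration, not my actual instruction set), a dispatch loop where every operand is a stack-pointer-relative byte offset could look like this:

#include <stdint.h>
#include <string.h>

typedef enum { OP_SET_C, OP_ADD_SS, OP_STOP } Op;

typedef struct {
    Op      op;
    int32_t dst;  /* stack-pointer-relative byte offset of the destination */
    int32_t a;    /* offset of the first source (or an immediate for OP_SET_C) */
    int32_t b;    /* offset of the second source */
} Instr;

static int32_t run(const Instr *code, uint8_t *stack)
{
    uint8_t *sp = stack;
    for (const Instr *ip = code;; ip++) {
        switch (ip->op) {
        case OP_SET_C: {                      /* [sp+dst] = immediate */
            int32_t v = ip->a;
            memcpy(sp + ip->dst, &v, 4);
        } break;
        case OP_ADD_SS: {                     /* [sp+dst] = [sp+a] + [sp+b] */
            int32_t x, y, r;
            memcpy(&x, sp + ip->a, 4);
            memcpy(&y, sp + ip->b, 4);
            r = x + y;
            memcpy(sp + ip->dst, &r, 4);
        } break;
        case OP_STOP: {
            int32_t r;
            memcpy(&r, sp, 4);                /* result at offset 0 */
            return r;
        }
        }
    }
}

/* example: compute 2 + 3 into offset 0, then stop */
int main(void)
{
    Instr prog[] = {
        { OP_SET_C,  4, 2, 0 },   /* [sp+4] = 2 */
        { OP_SET_C,  8, 3, 0 },   /* [sp+8] = 3 */
        { OP_ADD_SS, 0, 4, 8 },   /* [sp+0] = [sp+4] + [sp+8] */
        { OP_STOP,   0, 0, 0 },
    };
    uint8_t stack[64];
    return run(prog, stack) == 5 ? 0 : 1;
}

Arithmetic reads from and writes to the stack directly, so there is no separate load/store step and no register file to manage when generating the bytecode.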
Some work-in-progress game-physics code: position-based-dynamics-style resolution of contact constraints. Basically I just wanted my own physics library for making simple, physical 3D games. It is mostly in the backlog, since my current game ideas don't need full physics.
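The core of position-based contact resolution can be sketched like this (illustrative, not the library's code): push the two positions apart along the contact normal, weighted by inverse mass, and derive velocities from the position change elsewhere.

typedef struct { float pos[3]; float inv_mass; } Body;

/* 'n' is the contact normal pointing from b towards a, 'depth' the penetration */
void resolve_contact(Body *a, Body *b, const float n[3], float depth)
{
    float w = a->inv_mass + b->inv_mass;
    if (depth <= 0.0f || w <= 0.0f) return;
    float sa = depth * (a->inv_mass / w);   /* each body moves in proportion */
    float sb = depth * (b->inv_mass / w);   /* to its inverse mass           */
    for (int i = 0; i < 3; i++) {
        a->pos[i] += n[i] * sa;
        b->pos[i] -= n[i] * sb;
    }
}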
These were some of the projects I worked on that I thought would be worth showing. If you have any questions, suggestions or offers, feel free to send me an email or a message on Discord (user: _ymd_).