Tetra3D is a 3D hybrid software / hardware renderer written in Go using Ebitengine, primarily for video games. Compared to a professional 3D rendering system like OpenGL or Vulkan, it's slow and buggy, but it's also janky, and I love it for that. Tetra3D is largely implemented in software, but uses the GPU a bit for rendering triangles and for depth testing (using shaders to compare and write depth, and to composite the result onto the finished texture). Depth testing can be turned off for a slight performance increase, in exchange for no visual inter-object intersection.
Tetra3D's rendering evokes a similar feeling to primitive 3D game consoles like the PS1, N64, or DS. Since a largely software-based renderer is nowhere near fast enough for big, modern 3D titles, the best you're going to get out of Tetra3D is drawing some 3D elements for your primarily 2D Ebitengine game, or a relatively simple fully 3D game (i.e. something on the level of a PS1 or N64 game). That said, limitation breeds creativity, and I am intrigued by the thought of what people could make with Tetra3D.
Tetra3D also gives you a Blender add-on to make the Blender > Tetra3D development process flow a bit smoother. See the Releases section for the add-on, and this wiki page for more information.
## Why did I make it?
Because there aren't many options for 3D game development in Go, apart from g3n, go-gl, and Raylib-go. I like Go, I like janky 3D, and so, here we are.
It's also interesting to have the ability to spontaneously do things in 3D sometimes. For example, if you were making a 2D game with Ebitengine but wanted to display just a few GUI elements or objects in 3D, Tetra3D should work well for you.
Finally, while this hybrid renderer is not by any means fast, it is relatively simple and easy to use. Any platforms that Ebiten supports should also work for Tetra3D automatically. Basing a 3D framework off of an existing 2D framework also means any improvements or refinements to Ebitengine may be of help to Tetra3D, and it keeps the codebase small and unified between platforms.
## Why Tetra3D? Why is it named that?
Because it's like a tetrahedron, a relatively primitive (but visually interesting) 3D shape made of 4 triangles. Otherwise, I had other names, but I didn't really like them very much. "Jank3D" was the second-best one, haha.
## How do you get it?
```shell
go get github.com/solarlune/tetra3d
```
Tetra3D depends on Ebitengine for rendering. It requires Go v1.16 or above. This minimum version is somewhat arbitrary, as Tetra3D could run on an older Go version if a couple of functions (primarily the ones that load data from a file directly) were changed.
There is an optional Blender add-on as well (tetra3d.py) that can be downloaded from the releases page or from the repo directly (i.e. click on the file and download it). The add-on provides some useful helper functionality that makes using Tetra3D simpler - for more information, check the Wiki.
## How do you use it?
Load a scene, render it. A simple 3D framework means a simple 3D API.
Here's an example:
```go
package main

import (
	"fmt"
	"image/color"

	"github.com/hajimehoshi/ebiten/v2"
	"github.com/solarlune/tetra3d"
)

type Game struct {
	GameScene *tetra3d.Scene
	Camera    *tetra3d.Camera
}

func NewGame() *Game {
	g := &Game{}

	// First, we load a scene from a .gltf or .glb file. LoadGLTFFile takes a filepath and
	// any loading options (nil can be taken as a valid default set of loading options), and
	// returns a *tetra3d.Library and an error if it was unsuccessful. We can also use
	// tetra3d.LoadGLTFData() if we don't have access to the host OS's filesystem (if the
	// assets are embedded, for example).
	library, err := tetra3d.LoadGLTFFile("example.gltf", nil)
	if err != nil {
		panic(err)
	}

	// A Library is essentially everything that got exported from your 3D modeler -
	// all of the scenes, meshes, materials, and animations. The ExportedScene of a
	// Library is the scene that was active when the file was exported.

	// We'll clone the ExportedScene so we don't change it irreversibly; making a clone
	// of a Tetra3D resource (Scene, Node, Material, Mesh, Camera, whatever) makes a deep
	// copy of it.
	g.GameScene = library.ExportedScene.Clone()

	// Tetra3D uses OpenGL's coordinate system (+X = Right, +Y = Up, +Z = Backward
	// [towards the camera]), in comparison to Blender's coordinate system (+X = Right,
	// +Y = Forward, +Z = Up). Note that when loading models in via GLTF or DAE, models
	// are converted automatically (so up is +Z in Blender and +Y in Tetra3D).

	// We could create a new Camera as below - we would pass the size of the screen to the
	// Camera so it can create its own buffer textures (which are *ebiten.Images).

	// g.Camera = tetra3d.NewCamera(ScreenWidth, ScreenHeight)

	// However, we can also just grab an existing camera from the scene if it
	// was exported from the GLTF file - if exported through Blender's Tetra3D add-on,
	// then the camera size can be easily set from within Blender.
	g.Camera = g.GameScene.Root.Get("Camera").(*tetra3d.Camera)

	// Camera implements the tetra3d.INode interface, which means it can be placed
	// in 3D space and can be parented to another Node somewhere in the scene tree.
	// Models, Lights, and Nodes (which are essentially "empties" one can use for
	// positioning and parenting) can, as well.

	// We can place Models, Cameras, and other Nodes with node.SetWorldPosition() or
	// node.SetLocalPosition(). There are also variants that take a 3D Vector.

	// The World variants of positioning functions take into account absolute space;
	// the Local variants position Nodes relative to their parents' positioning and
	// transforms (and are more performant).

	// You can also move Nodes using Node.Move(x, y, z) / Node.MoveVec(vector).

	// Each Scene has a tree that starts with the Root Node. To add Nodes to the Scene,
	// parent them to the Scene's base, like so:

	// scene.Root.AddChildren(object)

	// To remove them, you can unparent them from either the parent (Node.RemoveChildren())
	// or the child (Node.Unparent()). When a Node is unparented, it is removed from the
	// scene tree; if you want to destroy the Node, then dropping any references to it
	// at this point would be sufficient.

	// For Cameras, we don't actually need to place them in a scene to view the Scene,
	// since the presence of the Camera in the Scene node tree doesn't impact what it
	// would see.

	// We can see the tree "visually" by printing out the hierarchy:
	fmt.Println(g.GameScene.Root.HierarchyAsString())

	// You can also visualize the scene hierarchy using TetraTerm:
	// https://github.com/SolarLune/tetraterm

	return g
}

func (g *Game) Update() error { return nil }

func (g *Game) Draw(screen *ebiten.Image) {
	// Here, we'll call Camera.Clear() to clear its internal backing texture. This
	// should be called once per frame before drawing your Scene.
	g.Camera.Clear()

	// Now we'll render the Scene from the camera. The Camera's ColorTexture will then
	// hold the result.

	// Camera.RenderScene() renders all Nodes in a scene, starting with the scene's
	// root. You can also use Camera.Render() to simply render a selection of
	// individual Models, or Camera.RenderNodes() to render a subset of a scene tree.
	g.Camera.RenderScene(g.GameScene)

	// To see the result, we draw the Camera's ColorTexture to the screen.
	// Before doing so, we'll clear the screen first. In this case, we'll do this
	// with a color, though we could also go with screen.Clear().
	screen.Fill(color.RGBA{20, 30, 40, 255})

	// Draw the resulting texture to the screen, and you're done! You can
	// also visualize the depth texture with g.Camera.DepthTexture().
	screen.DrawImage(g.Camera.ColorTexture(), nil)

	// Note that the resulting texture is indeed just an ordinary *ebiten.Image, so
	// you can also use it as a texture for a Model's Material, as an example.
}

func (g *Game) Layout(w, h int) (int, int) {
	// Here, by simply returning the camera's size, we are essentially
	// scaling the camera's output to the window size and letterboxing as necessary.
	// If you wanted to extend the camera according to window size, you would
	// have to resize the camera using the window's new width and height.
	return g.Camera.Size()
}

func main() {
	game := NewGame()
	if err := ebiten.RunGame(game); err != nil {
		panic(err)
	}
}
```
You can also do collision testing between BoundingObjects, a category of nodes designed for this purpose. As a simplified example:
```go
type Game struct {
	Cube    *tetra3d.BoundingAABB
	Capsule *tetra3d.BoundingCapsule
}

func NewGame() *Game {
	g := &Game{}

	// Create a new BoundingCapsule named "player", 1 unit tall with a
	// 0.25 unit radius for the caps at the ends.
	g.Capsule = tetra3d.NewBoundingCapsule("player", 1, 0.25)

	// Create a new BoundingAABB named "block", of 0.5 width, height,
	// and depth (in that order).
	g.Cube = tetra3d.NewBoundingAABB("block", 0.5, 0.5, 0.5)

	// Move the cube over on the X axis by -4 units.
	g.Cube.Move(-4, 0, 0)

	return g
}

func (g *Game) Update() {
	// Move the capsule 0.2 units to the right every frame.
	g.Capsule.Move(0.2, 0, 0)

	// Prints the resulting Collision if there was an intersection, or nil if there wasn't.
	fmt.Println(g.Capsule.Collision(g.Cube))
}
```
If you want a deeper collision test against multiple objects, you can use IBoundingObject.CollisionTest(). Take a look at the Wiki and the bounds example for more info.
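Under the hood, shape-vs-shape checks like the one above boil down to closed-form math. Here's a self-contained sketch of the simplest pair, a sphere against an AABB; the `Vec3` type and `sphereVsAABB` function are made up for illustration and are not Tetra3D's actual API:

```go
package main

import (
	"fmt"
	"math"
)

// Vec3 is a minimal 3D vector (hypothetical; Tetra3D has its own vector type).
type Vec3 struct{ X, Y, Z float64 }

// sphereVsAABB reports whether a sphere intersects an axis-aligned box by
// clamping the sphere's center to the box and measuring the squared distance
// to that closest point.
func sphereVsAABB(center Vec3, radius float64, boxMin, boxMax Vec3) bool {
	cx := math.Max(boxMin.X, math.Min(center.X, boxMax.X))
	cy := math.Max(boxMin.Y, math.Min(center.Y, boxMax.Y))
	cz := math.Max(boxMin.Z, math.Min(center.Z, boxMax.Z))
	dx, dy, dz := center.X-cx, center.Y-cy, center.Z-cz
	return dx*dx+dy*dy+dz*dz <= radius*radius
}

func main() {
	boxMin := Vec3{-0.25, -0.25, -0.25}
	boxMax := Vec3{0.25, 0.25, 0.25}
	// A sphere 4 units away doesn't touch the box; one right next to it does.
	fmt.Println(sphereVsAABB(Vec3{X: -4}, 0.25, boxMin, boxMax))  // false
	fmt.Println(sphereVsAABB(Vec3{X: 0.4}, 0.25, boxMin, boxMax)) // true
}
```

The clamp-then-measure trick is the standard closest-point formulation; capsules and triangles generalize it by finding the closest point on a segment or on a triangle's plane instead.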
That's basically it. Note that Tetra3D is indeed a work in progress, and so will take time to reach a polished state. But I feel like it works pretty well as is. Feel free to examine the examples in the examples folder. Calling `go run .` from within their directories will run them; the mouse usually controls the view, and clicking locks and unlocks the view.
There's a quick start project repo available here as well, to help with getting started.
For more information, check out the Wiki for tips and tricks.
## What's missing?
The following is a rough to-do list (tasks with checks have been implemented):
3D rendering
-- Perspective projection
-- Orthographic projection (it's kinda jank, but it works)
-- Automatic billboarding
-- Sprites (a way to draw 2D images with no perspective changes (if desired), but within 3D space) (not sure?)
-- Basic depth sorting (sorting vertices in a model according to distance, sorting models according to distance)
-- A depth buffer and depth testing - This is now implemented by means of a depth texture and Kage shader, though the downside is that it requires rendering and compositing the scene into textures twice. Also, it doesn't work on triangles from the same object (as we can't render to the depth texture while reading it for existing depth).
-- A more advanced / accurate depth buffer
-- Writing depth through some other means than vertex colors for precision - This is fine for now, I think.
-- Depth testing within the same object - I'm unsure if I will be able to implement this.
-- Offscreen Rendering
-- Mesh merging - Meshes can be merged together to lessen individual object draw calls.
-- Render batching - We can avoid calling Image.DrawTriangles between objects if they share properties (blend mode, material, etc) and it's not too many triangles to push before flushing to the GPU. Perhaps these Materials can have a flag that you can toggle to enable this behavior? (EDIT: This has been partially added by dynamic batching of Models.)
-- Texture wrapping (will require rendering with shaders) - This is kind of implemented, but I don't believe it's been implemented for alpha clip materials.
-- Draw triangles shapes in 3D space through a function (could be useful for 3D lines, for example)
-- 3D Text (2D text, rendered on an appropriately-sized 3D plane)
-- -- Typewriter effect
-- -- Customizable cursor
-- -- Horizontal alignment
-- -- Vertical alignment
-- -- Vertical Scrolling
-- -- Replace style setting system with dedicated Style object, with a function to flush various style changes to batch and update the Text texture all at once?
-- -- Outlines
-- -- Shadows
-- -- Gradients
-- -- -- Other patterns?
-- -- Parsing text for per-letter effects (this would probably require rendering the glyphs from a font to individual images to render; could also involve shaders?)
-- -- -- Per-letter colors
-- -- -- Bold
-- -- -- Italics
-- -- -- Strikethrough
-- -- -- Letters fading in or out, flickering?
-- -- -- Letters changing to other glyphs randomly
-- Perspective-corrected texturing (currently it's affine, see Wikipedia)
-- Automatic triangle / mesh subdivision depending on distance
-- Automatic level of detail
-- Manual level of detail (ability to render a model using various meshes in stages); note that these stages should be accessible at runtime to allow cloning meshes, for example
Culling
-- Backface culling
-- Frustum culling
-- Far triangle culling
-- Triangle clipping to view (this isn't implemented, but not having it doesn't seem to be too much of a problem for now)
-- Sectors - The general idea is that the camera can be set up to only render sectors that it's in / neighboring (up to a customizable depth)
-- -- Some method to have objects appear in multiple Sectors, but not others?
-- Ability to use screen coordinates instead of just UV texturing (useful for repeating patterns)
-- Replace opaque transparency mode with just automatic transparency mode? I feel like there might be a reason to have opaque separate, but I can't imagine a normal situation where you'd want it when you could just go with auto + setting alpha to 1
-- Mesh swapping during animations, primarily for 2D skeletal animation? (Can be worked around using bones.)
Scenes
-- Fog
-- A node or scenegraph for parenting and simple visibility culling
-- Ambient vertex coloring
GLTF / GLB model loading
-- Vertex colors loading
-- Multiple vertex color channels
-- UV map loading
-- Normal loading
-- Transform / full scene loading
-- Animation loading
-- Camera loading
-- Loading world color in as ambient lighting
-- Separate .bin loading
-- Support for multiple scenes in a single Blend file (was broken due to GLTF exporter changes; working again in Blender 3.3)
Blender Add-on
-- Export 3D view camera to Scenes for quick iteration
-- Object-level color checkbox
-- Object-level shadeless checkbox?
-- Custom mesh attribute to assign values to vertices, allowing you to, say, "mark" vertices
-- Export GLTF on save / on command via button
-- Bounds node creation
-- Game property export (less clunky version of Blender's vanilla custom properties)
-- Collection / group substitution
-- -- Overwriting properties through collection instance objects (it would be nice to do this cleanly with a nice UI, but just hamfisting it is fine for now)
-- -- Collection instances instantiate their objects in the same location in the tree
-- Optional camera size export
-- Linking collections from external files
-- Material data export
-- Option to pack textures or leave them as a path
-- Path / 3D Curve support
-- Grid support (for pathfinding / linking 3D points together)
-- -- Adding costs to pathfinding (should be as simple as adding a cost and currentcost to each GridPoint, then sorting the points to check by cost when pathfinding, then reduce all costs greater than 1 by 1 ) (7/5/23, SolarLune: This works currently, but the pathfinding is still a bit wonky, so it should be looked at again)
-- Toggleable option for drawing game property status to screen for each object using the gpu and blf modules
-- Game properties should be an ordered slice, rather than a map of property name to property values. (5/22/23, SolarLune: should it be?)
-- Consistency between Tetra3D material settings and Blender viewport (so modifying the options in the Tetra3D material panel alters the relevant options in a default material to not mess with it; maybe the material settings should even be wholly disabled for this purpose? It would be great if the models looked the way you'd expect)
-- Components (This would also require meta-programming; it'd be nice if components could have elements that were adjustable in Blender. Maybe a "game object" can have a dynamically-written "Components" struct, with space for one of each kind of component, for simplicity (i.e. one physics controller, one gameplay controller, one animation component, etc). There doesn't need to be ways to add or remove components from an object, and components can have any of OnInit, OnAdd, OnRemove, or Update functions to be considered components).
DAE model loading
-- Vertex colors loading
-- UV map loading
-- Normal loading
-- Transform / full scene loading
Lighting
-- Smooth shading
-- Ambient lights
-- Point lights
-- Directional lights
-- Cube (AABB volume) lights
-- Lighting Groups
-- Ability to bake lighting to vertex colors
-- Ability to bake ambient occlusion to vertex colors
-- Specular lighting (shininess)
-- Lighting Probes - general idea is to be able to specify a space that has basic (optionally continuously updated) AO and lighting information, so standing a character in this spot makes him greener, that spot redder, that spot darker because he's in the shadows, etc.
-- Lightmaps - might be possible with being able to use multiple textures at the same time now?
-- Baking AO and lighting into vertex colors? from Blender? It's possible to do already using Cycles, but not very easy or simple.
-- Take into account view normal (seems most useful for seeing a dark side if looking at a non-backface-culled triangle that is lit) - This is now done for point lights, but not sun lights
-- Per-fragment lighting (by pushing it to the GPU, it would be more efficient and look better, of course)
Particles
-- Basic particle system support
-- Fix layering issue when rendering a particle system underneath another one (visible in the Particles example)
Shaders
-- Custom fragment shaders
-- Normal rendering (useful for, say, screen-space shaders)
Collision Testing
-- Normal reporting
-- Slope reporting
-- Contact point reporting
-- Varying collision shapes
-- Checking multiple collisions at the same time
-- Composing collision shapes out of multiple sub-shapes (this can be done by simply creating them, parenting them to some node, and then testing against that node)
-- Bounding / Broadphase collision checking
| Collision Type | Sphere | AABB | Triangle | Capsule |
| --- | --- | --- | --- | --- |
| Sphere | ✅ | ✅ | ✅ | ✅ |
| AABB | ✅ | ✅ | ⛔ (buggy) | ✅ |
| Triangle | ✅ | ⛔ (buggy) | ⛔ (buggy) | ✅ |
| Capsule | ✅ | ✅ | ✅ | ⛔ (buggy) |
| Ray | ✅ | ✅ | ✅ | ✅ |
-- An actual collision system?
3D Sound (adjusting panning of sound sources based on 3D location?)
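The panning idea floated above can be sketched independently of any audio library: project the normalized direction from the listener to the sound source onto the listener's right vector, giving a stereo pan value in [-1, 1]. This is a hypothetical sketch, not anything Tetra3D currently ships:

```go
package main

import (
	"fmt"
	"math"
)

type Vec3 struct{ X, Y, Z float64 }

// pan3D returns -1 (full left) to 1 (full right) by projecting the
// normalized direction to the source onto the listener's right vector.
func pan3D(listener, source, right Vec3) float64 {
	d := Vec3{source.X - listener.X, source.Y - listener.Y, source.Z - listener.Z}
	length := math.Sqrt(d.X*d.X + d.Y*d.Y + d.Z*d.Z)
	if length == 0 {
		return 0 // source at the listener's position: centered
	}
	return (d.X*right.X + d.Y*right.Y + d.Z*right.Z) / length
}

func main() {
	right := Vec3{X: 1} // listener facing -Z, OpenGL-style
	fmt.Println(pan3D(Vec3{}, Vec3{X: 5}, right))  // 1: hard right
	fmt.Println(pan3D(Vec3{}, Vec3{X: -5}, right)) // -1: hard left
	fmt.Println(pan3D(Vec3{}, Vec3{Z: -5}, right)) // 0: centered, dead ahead
}
```

The pan value could then feed a constant-power stereo gain curve on whatever audio backend the game uses (Ebitengine's audio package exposes per-channel volume through its panning utilities, though wiring that up is beyond this sketch).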
Optimization
-- It might be possible to not have to write depth manually (5/22/23, SolarLune: Not sure what past me meant by this)
-- Minimize texture-swapping - should be possible to do now that Kage shaders can handle images of multiple sizes.
-- Make NodeFilters work lazily, rather than gathering all nodes in the filter at once
-- Reusing vertex indices for adjacent triangles
-- Multithreading (particularly for vertex transformations)
And many other articles that I've forgotten to note down
... For sharing the information and code to make this possible; I would definitely have never been able to create this otherwise.