We build on the shoulders of giants. Here's everything we use, reference, or discuss — open source and commercial alike.
We build on Unreal Engine 5's Hardware Lumen pipeline, which routes ray tracing through Microsoft DXR and resolves on NVIDIA OptiX™ for GPU-accelerated denoising. RTX™ hardware enables reflections, shadows, and global illumination that respond to the scene in real time — not approximations baked at build time. Every light in our environments earns its place because the engine can afford to calculate it.
NVIDIA RTX Developer Docs ↗

We ship DLSS as an opt-in quality setting via the native Unreal Engine 5 DLSS plugin. Players on RTX™-class hardware can enable DLSS 3 to recover frame budget without trading image quality — a meaningful unlock for players who want our visual targets at higher refresh rates. We treat it as a player-controlled feature, not a performance crutch.
NVIDIA DLSS Developer Docs ↗

We are currently building a single-NPC proof-of-concept using NVIDIA ACE's NeMo™ (language model backbone), Riva™ (speech recognition and synthesis), and Audio2Face™ (real-time facial animation from audio). The goal is a character that processes spoken player input and responds with contextually coherent voice and expression — no branching dialogue, no canned lines. We will publish what we learn from this demo before committing to broader integration.
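The proof-of-concept reduces to a three-stage pipeline: speech recognition, language generation, facial animation. The sketch below shows that data flow only — `transcribe`, `generate_reply`, and `drive_face` are hypothetical placeholders for the Riva™, NeMo™, and Audio2Face™ integrations, not real SDK calls.

```python
# Minimal sketch of the ACE-style pipeline: player audio -> text -> reply -> face.
# The three callables are HYPOTHETICAL stand-ins for the Riva, NeMo, and
# Audio2Face stages; they are not real NVIDIA SDK APIs.

from dataclasses import dataclass
from typing import Callable

@dataclass
class NpcResponse:
    transcript: str       # what the player said (speech recognition stage)
    reply_text: str       # what the NPC says back (language model stage)
    viseme_frames: list   # facial animation frames (animation stage)

def respond(audio: bytes,
            transcribe: Callable[[bytes], str],
            generate_reply: Callable[[str], str],
            drive_face: Callable[[str], list]) -> NpcResponse:
    """Run one turn of the speech -> language -> animation pipeline."""
    transcript = transcribe(audio)       # speech in, text out
    reply = generate_reply(transcript)   # contextual response, no canned lines
    frames = drive_face(reply)           # reply audio/text drives the face
    return NpcResponse(transcript, reply, frames)

# Wiring with stubs to show the shape of the loop:
resp = respond(
    b"...",  # raw microphone audio
    transcribe=lambda a: "where is the market?",
    generate_reply=lambda t: f"You asked: {t} Head east past the gate.",
    drive_face=lambda text: [{"t": i * 0.033, "visemes": {}} for i in range(3)],
)
```

The stubs make the point of the architecture: each stage is swappable behind a narrow interface, which is what lets us evaluate the demo before committing to broader integration.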
NVIDIA ACE Developer Docs ↗

Unreal Engine 5 ships with Chaos as its default physics solver, and we use it for general simulation. We bring PhysX™ 5 in specifically where determinism matters — physics-dependent gameplay systems, networked simulation, and any sequence where two clients need to arrive at the same result from the same input. PhysX™ 5 is our precision layer, not our replacement for Chaos.
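The determinism requirement boils down to one property: the same inputs, applied in the same order over the same number of fixed timesteps, must produce bit-identical state. The toy integrator below is plain Python for illustration (not the PhysX™ API), showing the guarantee two networked clients need from the physics layer.

```python
# Toy fixed-timestep integrator illustrating the determinism property we lean
# on PhysX 5 for: identical inputs + identical step count => identical state.
# Plain Python for illustration only; this is not PhysX code.

DT = 1.0 / 60.0  # fixed simulation timestep, never tied to render framerate

def simulate(initial_pos, initial_vel, impulses):
    """Advance one body through a list of per-step input impulses."""
    pos, vel = initial_pos, initial_vel
    for impulse in impulses:
        vel += impulse        # apply this step's networked input
        vel += -9.81 * DT     # gravity over a fixed dt
        pos += vel * DT       # semi-implicit Euler step
    return pos, vel

# Two "clients" replay the same input stream and must agree exactly:
inputs = [0.0, 2.5, 0.0, 0.0, -1.0]
client_a = simulate(0.0, 0.0, inputs)
client_b = simulate(0.0, 0.0, inputs)
assert client_a == client_b  # bit-identical, not merely close
```

The fixed `DT` is the important design choice: stepping with a variable frame delta would make the result depend on each client's framerate, which is exactly what a deterministic precision layer has to rule out.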
NVIDIA PhysX Developer Docs ↗

Our current inference stack runs on Ollama and llama.cpp over CUDA, which covers our local AI workloads during development. We are evaluating TensorRT™ as the migration path for latency-sensitive inference — specifically for game-side AI systems where response time is measured in milliseconds, not seconds. We plan to migrate as those systems approach production requirements.
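For context on what the development stack looks like in practice: Ollama exposes an HTTP API on localhost, and its `/api/generate` endpoint takes a model name and a prompt. The sketch below builds such a request with the standard library; the model name is an example, not our actual configuration.

```python
# Sketch of calling a local Ollama server during development.
# Ollama serves an HTTP API on localhost:11434; /api/generate accepts a
# model name and prompt. "llama3" is an example model, not our config.

import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> request.Request:
    """Assemble a non-streaming generate request for a local Ollama daemon."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON response instead of a token stream
    }).encode("utf-8")
    return request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_request("llama3", "Name three uses of a grappling hook.")
# response = json.load(request.urlopen(req))  # requires a running Ollama daemon
```

The same request/response boundary is what makes the planned TensorRT™ migration tractable: game-side callers see an inference endpoint, not a specific backend.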
NVIDIA TensorRT Developer Docs ↗

CUDA Toolkit is the base layer of our GPU compute stack — it is what makes our NVIDIA hardware accessible to the software above it. Every AI inference workload, physics simulation, and rendering pipeline we run on GPU depends on CUDA. We do not build directly to CUDA in most cases; we build to tools that require it, which is why getting the Toolkit right is a prerequisite, not an afterthought.
NVIDIA CUDA Toolkit Docs ↗

We use Nsight™ Graphics as our primary GPU performance tool for Unreal Engine 5 builds. When a frame takes longer than it should, Nsight™ shows us which draw calls, shader passes, or memory operations are responsible — not which system we suspect. It is our standard instrument for optimization passes before any external milestone or review.
NVIDIA Nsight Graphics Docs ↗

We are tracking NVIDIA Omniverse™ as the foundation for a future USD-based asset pipeline — one that would allow our tools, engines, and external collaborators to work from a shared scene description format rather than repeated exports and conversions. This is a planned architectural direction, not a current workflow. We will integrate Omniverse™ when the pipeline around it is ready to support it.
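To make "shared scene description" concrete: USD scenes can be authored as plain text (`.usda`), which is what lets different tools read and layer the same source of truth. The fragment below is a hand-written illustration of the format, not one of our assets.

```usda
#usda 1.0

def Xform "Environment"
{
    def Sphere "Rock"
    {
        double radius = 2.0
    }
}
```

Because every tool in a USD pipeline reads and composes files like this directly, collaborators exchange layers of one scene rather than re-exporting between proprietary formats.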
NVIDIA Omniverse Developer Docs ↗

We write honestly about the tools we use, including commercial software. Some tools listed here — particularly in the game dev DCC pipeline — are ones we mention in technical content without a formal relationship.
If you represent a tool we mention and want to discuss partnership, co-marketing, or integration: info@monstergaming.ai