Dev Log #1 – New Horizons

Embarking on new adventures is always exciting. Earlier this year, I started exploring some new horizons by learning Unity and working through some tutorials. Unity itself and YouTube channels (such as Turbo Makes Games and Code Monkey) both have great material for learning how to make games, and it’s been fun watching how other game developers build theirs.

Today, I started keeping a dev log to chart my own game development journey. These logs will likely be fairly technical, but I am hoping they will still interest a broader audience.

Game Engines

New Horizons: Waving Character
Hello, world!

I chose Unity over other game engines such as Unreal Engine or Godot because Unity seems to have a lower barrier to entry and more tooling for solo developers to get started quickly.

Unreal Engine seems to be the engine of choice for large game studios making AAA titles, but it feels daunting for a solo dev who is just beginning their game dev journey.

Godot seems like it has a lot of promise, and I’m certain it will only improve with time. For now, though, it does not seem to have as much support for cross-platform VR builds as Unity does.

Unity’s ECS (part of DOTS) initially caught my attention, but it struck me as still in flux and incomplete. Tutorials that were less than a year old were already obsolete, and there was no obvious way to do certain things (animations, UI, etc.) without a hybrid approach anyway.

My goal is to make a VR game, and there don’t seem to be any great DOTS-compatible VR frameworks or controllers out there (and I’m not too keen on building my own). I have decided to just stick with regular Game Objects for now, but I will keep an eye on this technology and see if I can incorporate it in the future.

Keeping on with the theme of new horizons, I am still quite new to game development in general, let alone VR or Unity specifically. There are many topics I still need to investigate and learn more about. I’m still just getting a sense of the breadth of this process, and I have barely gone into any depth yet. There are so many things to learn, and the more I learn, the more I realize I do not know. Of course, that is how everyone starts out with any new subject. Thankfully, my background in programming and web development means that some of the concepts are familiar.

VR and Character Controllers

Anyhow, with that background out of the way, let’s look at the new horizons we’ll be exploring today. I’m going to focus on a broad overview of what I have been playing around with and the types of challenges I am looking forward to overcoming in the near future.

I am still playing around with various VR frameworks. One of the significant benefits of Unity is access to the Unity Asset Store where developers can buy and sell pre-made assets. I have already purchased a few assets which should help kickstart the development of an initial player controller.

For a little background context, let’s review what a player controller actually is. In most video games, the player (you) controls some entity (the character). Inputs may vary depending on the platform, such as a mouse & keyboard for PCs or a controller for consoles.

VR is no exception. Most VR headsets come with two controllers (one for each hand) that you can wave around. The “player controller” is essentially a system in the game that takes the user’s input and translates that into character actions. For example, when you press the grip button on the left controller, the character’s left hand should close to symbolize making a fist. Additionally, depending on where the hand is in the virtual world, the character might pick up a nearby object.
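
To make the input side a little more concrete, here is a minimal sketch using Unity’s XR input API. Everything specific to my setup (the Animator parameter name, how the hand model is wired up) is a placeholder; the frameworks mentioned below handle all of this, and much more, for you.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Minimal sketch of the input side: poll the left controller's grip each frame
// and drive a hand-close animation. The Animator parameter name ("GripAmount")
// and the component wiring are placeholders, not anything from a real asset.
public class LeftHandGrip : MonoBehaviour
{
    [SerializeField] private Animator handAnimator; // animator on the hand model

    void Update()
    {
        InputDevice left = InputDevices.GetDeviceAtXRNode(XRNode.LeftHand);

        // CommonUsages.grip is the analog grip value: 0 = open, 1 = fully squeezed.
        if (left.TryGetFeatureValue(CommonUsages.grip, out float gripAmount))
        {
            handAnimator.SetFloat("GripAmount", gripAmount);
        }
    }
}
```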

The character controller ultimately determines how immersive the gaming experience feels. It is responsible for all forms of locomotion and interaction. In VR specifically, the user interface (UI) is also handled quite differently than in traditional video games.

Normally, video game UIs are “overlays” that don’t exist in the game world (“world space”) but only in “screen space”. In VR, that approach can cause motion sickness, so the usual solution is to place the UI elements in the world itself (like a floating touch-screen panel).
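
For a rough idea of what that looks like in Unity, a Canvas can be switched to world space and parked in front of the player; the scale and distance values here are just illustrative:

```csharp
using UnityEngine;

// Minimal sketch: switch a Canvas to world space and park it in front of the
// player. The scale and distance values are illustrative, not recommendations.
public class WorldSpaceMenu : MonoBehaviour
{
    [SerializeField] private Canvas menuCanvas;
    [SerializeField] private Transform playerHead; // e.g. the VR camera's transform

    void Start()
    {
        menuCanvas.renderMode = RenderMode.WorldSpace;

        // World-space canvases use real-world units, so shrink the UI down
        // from pixel sizes to something that fits the scene.
        menuCanvas.transform.localScale = Vector3.one * 0.001f;

        // Float the panel 1.5 m in front of the player. A world-space canvas
        // reads correctly when its forward (+Z) axis points away from the viewer.
        menuCanvas.transform.position = playerHead.position + playerHead.forward * 1.5f;
        menuCanvas.transform.rotation =
            Quaternion.LookRotation(menuCanvas.transform.position - playerHead.position);
    }
}
```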

However, this approach comes with additional challenges, such as making sure the UI elements don’t interfere with, or get blocked by, in-world objects. For example, a tree or a wall could get between you and the menu. As for how the player actually interacts with the UI, there are generally two approaches: collisions and raycasts.

  • Collisions are where you have to actually poke the menu with your hand or fingers.
  • Raycasts project a line out from your hand like a virtual cursor to interact with UI elements.

Generally speaking, I prefer the raycast approach, as it lets the user interact with the UI from a distance.
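
Here is a rough sketch of the raycast idea, assuming the UI panels have colliders on their own layer. Dedicated VR interaction assets ship a far more complete version of this (beam rendering, uGUI event forwarding, hover states), so take this as a sketch of the core cast only:

```csharp
using UnityEngine;

// Rough sketch of the raycast approach: cast a line from the hand and check
// whether it hits something on a dedicated UI layer (this assumes the panels
// have colliders). This only shows the basic cast, not the full pointer logic.
public class HandRayPointer : MonoBehaviour
{
    [SerializeField] private Transform hand;        // controller/hand transform
    [SerializeField] private float maxDistance = 5f;
    [SerializeField] private LayerMask uiLayers;

    void Update()
    {
        Ray ray = new Ray(hand.position, hand.forward);

        if (Physics.Raycast(ray, out RaycastHit hit, maxDistance, uiLayers))
        {
            // Something on the UI layer is under the pointer; a real controller
            // would forward a "click" here when the trigger button is pressed.
            Debug.Log($"Pointing at {hit.collider.name} ({hit.distance:F2} m away)");
        }
    }
}
```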

image of a raycast in action
A raycast pointer in action


A character blocks the UI
A character is in the way!


Anyhow, premade assets or not, there is still a fair amount of learning and work to be done to get everything configured and working well together. I am currently trying to get a FinalIK-animated full-body avatar running on top of the Hexabody player controller.

I still have to figure out how to integrate Hurricane VR, Hexabody, FinalIK, Portals, etc. Furthermore, there is no guarantee that every premade asset will play nicely with the others or meet VR performance targets. It may well be that certain assets have to be set aside for a different project.

I look forward to exploring these new horizons in greater depth soon. In the next dev log, I will discuss the technical architecture of a multiplayer game and some of the considerations that come with it.
