Learning by doing
I've always believed that mistakes and failures make us wiser. The more you try and research, the more you learn. What are you waiting for?
Stealth Game made in Unreal Engine 4 as Degree Final Project
Unreal Engine 4
HIDEOUT is a Stealth-Tension Game made in Unreal Engine 4 by a multidisciplinary team of 8 programmers, 7 artists and 2 game designers. (10 months: September 2020 - June 2021)
In this project, I coordinated the programming team, scheduling and reviewing tasks throughout development to meet deadlines (besides implementing many gameplay and sound features, which I explain in the next section).
Beyond the design of the first level layout and blocking, I have implemented many features during the project development process.
All interactables (objects in the world the player can interact with in some way, like an obstacle to climb or a button to press that activates something) are based on a single class called InteractableBase, which holds all the base functionality that interactables share. The system is based on inheritance: each type of interactable overrides certain functions to adapt the base behavior to its own needs.
Let's go over what these objects do. An interactable has its Tick disabled until the player's detection sphere overlaps it. Once that happens, the object asks for an icon and saves a reference to it. On Tick (now enabled) we check whether there is a wall or a blocking object between the interactable and the player; if so, it cannot show its icon or let the player interact (this is the case when the interactable is behind a wall, for example). After that, the function ShouldIShowInfo is called, which makes it possible to customize the condition an object must meet to show its icon (in the case of obstacles, you have to be facing the wall to be able to climb it, but you cannot climb from a corner, so the obstacle is responsible for overriding this function and adding that extra check). If those conditions pass, we call ShowInfo to draw the saved icon on the screen; to achieve this we call a function that projects 3D coordinates to screen coordinates (how the icons work is covered in the next section). Now that the icon is showing, assuming the player is in range and all conditions are met, we wait for the player's input to receive the signal and do something. In the case of PickUps, when we receive the signal we add ourselves to the inventory of the player (the Instigator of the action).
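The flow above can be sketched in plain, engine-free C++. InteractableBase and ShouldIShowInfo are the names from the project; the member names and the Obstacle example are illustrative assumptions, not the real implementation:

```cpp
// Simplified sketch of the interactable flow: Tick stays off until the
// player's detection sphere finds us, and subclasses can veto the icon.
class InteractableBase {
public:
    virtual ~InteractableBase() = default;

    // Called when the player's detection sphere starts/stops overlapping us.
    void OnPlayerDetected() { tickEnabled_ = true;  /* request icon here */ }
    void OnPlayerLost()     { tickEnabled_ = false; /* release icon here */ }

    // Per-frame update; only does work while the player is nearby.
    void Tick(bool lineOfSightBlocked, bool facingOk) {
        if (!tickEnabled_) return;
        iconVisible_ = !lineOfSightBlocked && ShouldIShowInfo(facingOk);
    }

    bool IsIconVisible() const { return iconVisible_; }

protected:
    // Hook for subclasses to add their own visibility condition.
    virtual bool ShouldIShowInfo(bool /*facingOk*/) { return true; }

private:
    bool tickEnabled_ = false;
    bool iconVisible_ = false;
};

// An obstacle only shows its climb icon while the player is facing it.
class Obstacle : public InteractableBase {
protected:
    bool ShouldIShowInfo(bool facingOk) override { return facingOk; }
};
```

The key design point is that the base class owns the common checks (line of sight, icon bookkeeping) and each subclass only supplies its extra condition.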
Icons are created and initialized as UCustomImages, which have an extra bool called bIsTaken_ that determines whether the icon is free to use when someone asks for one. When any object asks for an icon, we return the first one that is free and mark it as taken.
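A minimal sketch of that pooling scheme, assuming a fixed-size pool; bIsTaken_ is the flag from the project, while CustomImage and the other fields are illustrative:

```cpp
#include <array>

// Pool entry mirroring the UCustomImage described above.
struct CustomImage {
    bool bIsTaken_ = false;
    int  textureId = -1;   // blank until the requester assigns an image
};

class IconPool {
public:
    // Return the first free icon and mark it as taken; nullptr if exhausted.
    CustomImage* Acquire() {
        for (CustomImage& img : pool_) {
            if (!img.bIsTaken_) { img.bIsTaken_ = true; return &img; }
        }
        return nullptr;
    }

    // Blank the icon again so another object can ask for it.
    void Release(CustomImage* img) {
        if (img) { img->bIsTaken_ = false; img->textureId = -1; }
    }

private:
    std::array<CustomImage, 8> pool_{};  // created once, at startup
};
```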
A pool of these special blank images is created at startup, so it is always ready. Since they are blank, the class that asks for an icon is responsible for setting the image it needs (which is directly related to the input device in use, to show the 'E' key for interactables on keyboard, for example). How do I handle that?
I have a special event on the main character controller that sends a message to every listener when the input device changes. I have a function called IsAnyKeyPressed, so whenever any key is pressed I check whether it came from a keyboard or a gamepad, comparing it against my record of which input device the player is currently using. If it differs, the input device has changed, so I update that previously mentioned record and broadcast an event that warns listeners of the change and says whether the device is now a keyboard or not.
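That check-and-broadcast can be sketched with a simple observer list (in the real project this is a UE4 delegate on the character controller; the listener plumbing here is an assumption):

```cpp
#include <functional>
#include <utility>
#include <vector>

// Tracks which input device the player last used and broadcasts on change.
class InputDeviceWatcher {
public:
    using Listener = std::function<void(bool /*isKeyboard*/)>;

    void AddListener(Listener l) { listeners_.push_back(std::move(l)); }

    // Call whenever any key is pressed, with the device it came from.
    void OnAnyKeyPressed(bool cameFromKeyboard) {
        if (cameFromKeyboard == usingKeyboard_) return;  // no change
        usingKeyboard_ = cameFromKeyboard;               // update the record
        for (auto& l : listeners_) l(usingKeyboard_);    // broadcast
    }

private:
    bool usingKeyboard_ = true;  // assumed starting device
    std::vector<Listener> listeners_;
};
```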
When a class initializes an icon, it calls a Game Instance function called GetIconFromAction. This function takes an "action" as a parameter, which can be Interact, Jump or similar, and returns the texture of the desired icon taking into account which input device the player is currently using, so the icon is always up to date and matches the device. Every listener calls this function again when the input device changes, so it also returns the icon for the new device. This allows hot-swapping input devices and icons at any moment without artifacts.
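In essence, GetIconFromAction is a lookup keyed by (action, device). A hedged sketch, where the function name and the Interact/Jump actions come from the text but the table and texture names are made up:

```cpp
#include <map>
#include <string>

enum class Action { Interact, Jump };

// Stand-in for the Game Instance's icon lookup.
class IconTable {
public:
    IconTable() {
        keyboard_[Action::Interact] = "key_E";
        gamepad_[Action::Interact]  = "pad_X";
        keyboard_[Action::Jump]     = "key_Space";
        gamepad_[Action::Jump]      = "pad_A";
    }

    // Returns the icon for the action on the device currently in use.
    std::string GetIconFromAction(Action a, bool usingKeyboard) const {
        const auto& table = usingKeyboard ? keyboard_ : gamepad_;
        auto it = table.find(a);
        return it != table.end() ? it->second : "missing";
    }

private:
    std::map<Action, std::string> keyboard_, gamepad_;
};
```

Calling it again whenever the device-changed event fires is what makes the hot-swap seamless.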
Interactable that activates something activable. (Powers a button here)
Interactable that activates something activable. (A door here)
Interactable pick up. Material to craft.
Interactable collectable. Like a pick up but triggering a special pop up.
Obstacle. (Tall enough to climb)
Obstacle. (Small, vault)
Hidespot. (Vertical, locker)
Hidespot. (Horizontal, bed)
Aside from the system itself, I implemented all of their functionalities. Activables like the Garage Door seen in the second gif above were also implemented by me.
The camera of a game is extremely important, as it is the eyes of the player. It has to convey the sensations we are looking for: frenzy when running, protection when crouching, and so on. For that, the camera in our game is always moving towards a target position (unless it is already placed at the target position). To move the camera smoothly somewhere, we just move the target position and the camera transitions to that location. The same applies to the Field of View.
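The "always chase a target" idea can be sketched with per-frame interpolation, similar in spirit to UE4's FInterpTo helpers. The idea is from the text; the specific math and speed constant are assumptions:

```cpp
// One-axis camera state, for brevity; position and FOV both chase targets.
struct CameraState {
    float position = 0.f;
    float fov      = 90.f;
};

// Move `current` a fraction of the way to `target` each frame,
// clamping so we never overshoot.
float InterpTo(float current, float target, float deltaSeconds, float speed) {
    float step = (target - current) * deltaSeconds * speed;
    if ((target - current) * (target - current - step) <= 0.f) return target;
    return current + step;
}

void TickCamera(CameraState& cam, float targetPos, float targetFov,
                float deltaSeconds) {
    cam.position = InterpTo(cam.position, targetPos, deltaSeconds, 5.f);
    cam.fov      = InterpTo(cam.fov,      targetFov, deltaSeconds, 5.f);
}
```

Because only the target moves, every camera behavior (crouching, landmark focus, FOV kicks) reduces to writing a new target and letting the same transition code run.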
This is what happens when the player performs certain actions:
In our game, when we want to highlight a building or something similar to guide the player, we trigger a Camera Landmark Focus: the player hears a characteristic sound and sees a button prompt in the bottom-left corner. If the player presses the button, the camera fixes on the desired landmark position as the place to look. The player can still move while this is happening, but more slowly, and the camera FOV is lowered to convey the "focus on that thing" sensation. The effect is voluntary, so if the player does not press anything, they are not forced to look there. The effect ends after a certain time or when exiting the trigger volume, for better control.
In a game like ours we need a way to play the main character's monologues and voices in general, and for better understanding we needed subtitles. A Subtitle Block Handler is an object that holds all the information needed for a set of subtitles (a monologue). To create one, we make a Blueprint inheriting from our SubtitleBlockHandler class and fill the vector of SubtitleLines with the pertinent information. Each SubtitleLine has an Audio Source, a string for the text and a float value for the line time. The system receives a SubtitleBlockHandler and plays it line by line. If another subtitle block is sent to the system while one is playing, it waits until the last line of the current one has ended, and then starts. There are also events at the end of each block so designers can attach a mission update, a door opening or other scripted behavior in the Level Script.
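The queueing behavior can be sketched like this. SubtitleLine and SubtitleBlockHandler are the project's names; the FIFO queue and Tick-based timing are my reading of "wait until the current one has ended":

```cpp
#include <deque>
#include <string>
#include <utility>
#include <vector>

struct SubtitleLine {
    std::string text;
    float       seconds;  // how long the line stays on screen
};

struct SubtitleBlockHandler {
    std::vector<SubtitleLine> lines;
};

class SubtitleSystem {
public:
    // New blocks queue up behind whatever is currently playing.
    void Play(SubtitleBlockHandler block) { queue_.push_back(std::move(block)); }

    // Advance time; returns the line currently on screen ("" if none).
    std::string Tick(float deltaSeconds) {
        if (queue_.empty()) return "";
        SubtitleBlockHandler& block = queue_.front();
        timeInLine_ += deltaSeconds;
        while (lineIndex_ < block.lines.size() &&
               timeInLine_ >= block.lines[lineIndex_].seconds) {
            timeInLine_ -= block.lines[lineIndex_].seconds;
            ++lineIndex_;
        }
        if (lineIndex_ >= block.lines.size()) {
            // Block finished: this is where the end-of-block event would fire.
            queue_.pop_front();
            lineIndex_ = 0;
            timeInLine_ = 0.f;
            return "";
        }
        return block.lines[lineIndex_].text;
    }

private:
    std::deque<SubtitleBlockHandler> queue_;
    size_t lineIndex_ = 0;
    float  timeInLine_ = 0.f;
};
```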
When we start a level, an interactive loading screen is shown: we can rotate a character model (randomly selected from all the characters in the game, including enemies) while the game loads in the background, so you do not get bored watching a spinning circle. When everything is loaded, the loading spinner changes to a line that says "Press [button] to continue"; this [button] changes with the input device thanks to the previously mentioned icon system. Player input is blocked until that moment so there are no problems when spawning and to avoid potential artifacts.
Sound is pretty important in this kind of game, so the sound system has to be versatile and robust. We use both .wav files and FMOD Events to manage sounds. We can separate Sound Effects from the soundtrack because they work differently:
Sound effects work as requests to a database in our project. We have a library that stores all the sounds, accessible from the Game Instance. To play, pause or stop a sound, you call the function in the Game Instance, passing its unique identifier (name) as a parameter and, optionally, a location to play it spatialized. To store a sound we must provide a name (ID), an audio source (AudioWave or FMOD Event) and a channel to play it on. The channel is required to control its volume through the options menu (sound effects, music, dialogue, etc.).
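A rough sketch of that name-keyed library, with playback stubbed out; the ID and channel fields follow the text, everything else is illustrative:

```cpp
#include <map>
#include <string>
#include <utility>

enum class Channel { Effects, Music, Dialogue };

struct SoundEntry {
    std::string source;   // path to a .wav or an FMOD Event name
    Channel     channel;  // volume group controlled from the options menu
};

class SoundLibrary {
public:
    void Register(const std::string& id, SoundEntry entry) {
        sounds_[id] = std::move(entry);
    }

    // Returns false if the id is unknown; a real version would start playback
    // (optionally spatialized at a given location).
    bool Play(const std::string& id) {
        auto it = sounds_.find(id);
        if (it == sounds_.end()) return false;
        ++playCount_;
        return true;
    }

    int PlayCount() const { return playCount_; }

private:
    std::map<std::string, SoundEntry> sounds_;
    int playCount_ = 0;
};
```

The per-entry channel is what lets one options-menu slider scale every sound in its group.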
The soundtrack is FMOD-based, which matters because we have to be able to modify it dynamically. We have a layering system: we activate and deactivate layers in the FMOD Event depending on the intensity of the situation the player is facing. For example, if the player starts being chased by enemies, this parameter changes and layers with percussion and boosted bass are faded in. This is very useful, as sound strongly affects the player and can help make them nervous or relaxed, depending on what we want. We use the same parametrization in other sound areas, like the player's heartbeat, which becomes louder and faster when the player is in danger and softer when no threat is near.
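Stripped of the FMOD specifics, intensity-driven layering boils down to fading each layer toward full volume once an intensity threshold is crossed. A sketch under that assumption (FMOD drives this in the real project; the thresholds and fade logic here are made up):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

struct MusicLayer {
    float threshold;     // minimum intensity at which this layer is audible
    float volume = 0.f;
};

class LayeredTrack {
public:
    explicit LayeredTrack(std::vector<MusicLayer> layers)
        : layers_(std::move(layers)) {}

    // Fade each layer in or out depending on the current intensity.
    void Tick(float intensity, float deltaSeconds, float fadeSpeed = 1.f) {
        for (MusicLayer& layer : layers_) {
            float target = intensity >= layer.threshold ? 1.f : 0.f;
            float step = fadeSpeed * deltaSeconds;
            if (layer.volume < target)
                layer.volume = std::min(layer.volume + step, target);
            else
                layer.volume = std::max(layer.volume - step, target);
        }
    }

    float VolumeOf(size_t i) const { return layers_[i].volume; }

private:
    std::vector<MusicLayer> layers_;
};
```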