SplashDash


Download:
Windows | Mac OS X | Linux

SplashDash is a 3D, freerunning, score attack game developed over twelve weeks by four final-year programmers and seven final-year 3D artists from the University of Derby, for the Game Development module.

Robots obsessed with efficiency and order have taken control, rendering our world grey and tedious. As a teenager gifted with the touch of colour, you have the power to bring back joy and excitement, and the agility to evade our robot rulers’ mechanisms.

Dash around the city, painting as many walls, rooftops, people, and plants as fast as you can to inspire a revolution, but be careful of Robo Corp. cameras, patrol bots, and electric fences. Compete with your friends in local two-player split-screen (joypads required).

Game Design and Mechanics
For the Game Development module we were given the theme of “Chaotic Pacifism”. Working with restrictions such as a theme can be a good way to boost creativity, and while other teams decided to go with more conventional ideas – creating horror games or combat mechanics that merely disabled opponents – we decided to go for something a bit more unusual.

In SplashDash the player paints buildings, props, and pedestrians with his current colour simply by running over or near them. Buildings are split into sections, so they require more contact to paint, but a section will fill in once you hit a certain percentage. The original idea was to paint colour onto the surfaces dynamically, possibly by drawing onto the textures, but we quickly decided to simplify this for time constraints' sake. As the player paints things they build up a combo, which moves their colour along a preset range and earns more points for each object painted.
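The gist of it, heavily simplified, looks something like the sketch below (the class names and numbers are illustrative, not the project's actual code):

using UnityEngine;

// Illustrative only: section-based painting with a fill threshold and a simple combo.
public class BuildingSection : MonoBehaviour
{
    public int totalPatches = 10;        // sub-areas the player must touch
    public float fillThreshold = 0.6f;   // fraction of patches required before the section fills in
    private int paintedPatches;
    private bool filled;

    // Called when the player runs over or near one patch of this section.
    public void PaintPatch(PlayerPainter player)
    {
        if (filled) return;
        paintedPatches++;
        if ((float)paintedPatches / totalPatches >= fillThreshold)
        {
            filled = true;
            GetComponent<Renderer>().material.color = player.CurrentColour;
            player.RegisterPaint();      // bumps the combo and awards scaled points
        }
    }
}

public class PlayerPainter : MonoBehaviour
{
    public Color[] comboColours;         // the preset colour range the combo moves along
    public int score;
    private int combo;

    public Color CurrentColour
    {
        get { return comboColours[Mathf.Min(combo, comboColours.Length - 1)]; }
    }

    public void RegisterPaint()
    {
        combo++;
        score += 10 * combo;             // more points per object as the combo grows
    }
}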

The player is also faced with obstacles which will actively and passively hinder painting. These include security cameras, which will lock onto the player and spawn a cleaning bot after a short delay, and patrol bots, which will patrol preset paths, but pursue and attempt to stun the player if they spot him. Once released, cleaning bots will unpaint a set number of objects before returning home. The player can temporarily blind cameras and patrol bots, and send cleaning bots home by hitting them with a ranged paintball attack.

To give the player the mobility required to reach the rooftops and evade the watchful lenses of our robot rulers, they are given a number of freerunning, or “parkour”, moves. Ledgegrabs enable them to pull themselves up to platforms that are just a little too high or far away to reach with a jump, wallruns let them extend jumps where suitable surfaces are available, vertical wallruns give them extra height over jumps, and wall rebounds are a quick way to turn around and zig-zag up walls in a narrow alleyway.

[Screenshots: a completed building; the rebound tutorial]

Unity
Unity was selected for its advanced graphical IDE, and because two of our programmers (myself not included) already had a year or so's experience with it. The artists' preference was UDK, but we felt that its scripting workflow was counter-productive, and that we could accommodate their needs in Unity with regards to shaders, lighting, etc.

The remainder of this page recounts my major contributions to the project:

CharacterController
After some experimentation I chose to use a CharacterController for player characters, as it was simpler to re-implement the basic physics it lacks than to try to tame the undesirably realistic behaviour of a Rigidbody.
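As a rough illustration of the kind of basics that have to be added back, a minimal motor might look something like this (a sketch with placeholder values, not the project's actual movement code):

using UnityEngine;

[RequireComponent(typeof(CharacterController))]
public class SimpleCharacterMotor : MonoBehaviour
{
    public float gravity = -20f;
    public float moveSpeed = 8f;
    private CharacterController controller;
    private float verticalSpeed;

    void Start()
    {
        controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        // A CharacterController has no physics of its own, so gravity is accumulated manually.
        if (controller.isGrounded && verticalSpeed < 0f)
            verticalSpeed = 0f;
        verticalSpeed += gravity * Time.deltaTime;

        Vector3 input = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
        Vector3 velocity = transform.TransformDirection(input) * moveSpeed;
        velocity.y = verticalSpeed;

        controller.Move(velocity * Time.deltaTime);
    }
}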

Freerunning Implementation
Since player movements and controls were my main responsibility for this project, the freerunning implementation also fell to me. I had never previously implemented such complex 3D movement mechanics, so this was a great challenge, and time constraints left little room for research, refinement, or optimisation. I opted for a system which depended purely on the geometry of the environment (we used simplified collision meshes for detailed building faces, etc.) so that we would not have to manually place hints and triggers, as suggested by some sources. A good number of raycasts are thrown around to analyse the relevant nearby geometry for the different moves; the algorithms have been optimised somewhat, but they remain a wall of vector maths and I am certain there is still much room for improvement.

Perhaps the most complicated manoeuvre is the ledgegrab, which requires the detection of a flat surface with enough space to stand on, in the right position, and unobstructed by other geometry.

This is handled by first casting several rays downwards, from set positions in front of the player at a certain height, for some distance. If a ray hits a surface we can retrieve its normal and position; steeply sloped surfaces are rejected by checking the normal's y value against a set minimum. We then cast rays from the player's left and right, at the height of the surface, towards it, and one ray back upwards for the height of the player, to check for obstructions. Finally, a ray is cast from the player towards the surface at a slightly lower position to identify the edge of the platform, which may be rejected if its normal is off at a harsh angle.
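The whole sequence boils down to something like the sketch below. It is heavily simplified: a single downward ray stands in for the several the game casts, and every distance and threshold is an illustrative placeholder rather than the real tuning.

using UnityEngine;

public class LedgeProbe : MonoBehaviour
{
    public float probeForward = 0.8f;    // how far in front of the player to probe
    public float probeHeight = 2.2f;     // height the downward ray starts from
    public float probeDepth = 1.5f;      // how far down it searches
    public float minSurfaceY = 0.7f;     // reject slopes whose normal.y is below this
    public float playerHeight = 1.8f;    // clearance needed above the ledge

    public bool TryFindLedge(out Vector3 ledgePoint, out Vector3 edgeNormal)
    {
        ledgePoint = Vector3.zero;
        edgeNormal = Vector3.forward;

        // 1) Cast down from a point ahead of and above the player to find a candidate surface.
        Vector3 origin = transform.position + transform.forward * probeForward + Vector3.up * probeHeight;
        RaycastHit surfaceHit;
        if (!Physics.Raycast(origin, Vector3.down, out surfaceHit, probeDepth)) return false;
        if (surfaceHit.normal.y < minSurfaceY) return false;            // too steep to stand on

        // 2) Obstruction checks: rays from the player's left and right at the surface height,
        //    towards the surface, plus one ray back upwards for the player's height.
        Vector3 flat = surfaceHit.point - transform.position; flat.y = 0f;
        float flatDistance = flat.magnitude;
        Vector3 flatDir = flat.normalized;
        Vector3 atSurfaceHeight = new Vector3(transform.position.x, surfaceHit.point.y + 0.1f, transform.position.z);
        if (Physics.Raycast(atSurfaceHeight + transform.right * 0.4f, flatDir, flatDistance)) return false;
        if (Physics.Raycast(atSurfaceHeight - transform.right * 0.4f, flatDir, flatDistance)) return false;
        if (Physics.Raycast(surfaceHit.point + Vector3.up * 0.05f, Vector3.up, playerHeight)) return false;

        // 3) Edge detection: a ray from the player towards the surface, slightly below its top,
        //    should hit the platform's front face; a harshly angled face is rejected.
        RaycastHit edgeHit;
        Vector3 edgeOrigin = new Vector3(transform.position.x, surfaceHit.point.y - 0.2f, transform.position.z);
        if (!Physics.Raycast(edgeOrigin, flatDir, out edgeHit, flatDistance + 0.5f)) return false;
        if (Vector3.Dot(edgeHit.normal, -flatDir) < 0.7f) return false; // edge is off at a harsh angle

        ledgePoint = surfaceHit.point;
        edgeNormal = edgeHit.normal;
        return true;
    }
}

On success, the surface point and edge normal are exactly the pieces of information the next step needs.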

Using the information garnered from a successful ledgegrab initiation, we can snap the player to the edge, set his height for the most appropriate animation, and set a point on the surface to climb up onto. The player will remain locked to the wall while the animation plays, and then “teleport” up into the correct position. During the animation the camera interpolates to the new position so that everything looks seamless to the player (barring some occasional glitches due to limitations in Unity’s Mecanim animator system).
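In outline, that climb-up step looks something like the hypothetical coroutine below (names, offsets, and timings are all illustrative):

using System.Collections;
using UnityEngine;

public class LedgeClimb : MonoBehaviour
{
    public Transform cameraTransform;
    public Vector3 cameraOffset = new Vector3(0f, 1.6f, -3f);   // placeholder camera offset
    public bool controlsLocked;

    public IEnumerator ClimbUp(Vector3 standPoint, float animationLength)
    {
        controlsLocked = true;                        // the player stays locked to the wall
        Vector3 cameraStart = cameraTransform.position;
        Vector3 cameraEnd = standPoint + cameraOffset;

        float t = 0f;
        while (t < animationLength)
        {
            t += Time.deltaTime;
            // Move the camera towards its post-climb position so the "teleport" of the
            // character at the end is not visible to the player.
            cameraTransform.position = Vector3.Lerp(cameraStart, cameraEnd, t / animationLength);
            yield return null;
        }

        transform.position = standPoint;              // snap the character up onto the ledge
        controlsLocked = false;
    }
}

It would be kicked off with StartCoroutine once the ledgegrab checks pass.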

[Screenshot: ledgegrab debug guides]

Debug lines demonstrating distance for wallrun (circle), ledgegrab & rebound (yellow line) detection, including height limitations of ledgegrabs. Snap-to-wall offsets also illustrated in blue.

The implementation of vertical and horizontal wallruns is much simpler, only requiring the detection of relatively vertical, flat surfaces at the right angle relative to the player. The player is snapped to the wall and the desired facing during these manoeuvres, and a similar check to the initial one is performed each frame to confirm that we can continue.

The more complicated of the two is the horizontal wallrun. The procedure is run twice, once each for left and right. A ray is cast sideways to detect suitable surfaces within range; a surface's suitability is determined by its normal. Next, a ray is cast to simulate [arbitrary] seconds’ worth of the player’s velocity. If this ray hits a surface, or another ray cast sideways from its terminating point does, we compare the position and normal of the hit to the first, so as to reject non-flat(ish) surfaces. There is some tolerance for differences in angle and distance to allow for cool (though unrealistic) actions such as wallrunning on curves.
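Put concretely, it amounts to something like this sketch, with made-up thresholds standing in for the real tuning:

using UnityEngine;

public class WallrunProbe : MonoBehaviour
{
    public float sideReach = 1.2f;        // how far to the side a wall may be
    public float lookAheadTime = 0.4f;    // seconds' worth of velocity to project forwards
    public float normalTolerance = 0.85f; // how closely the two hit normals must agree

    public bool CanWallrun(Vector3 velocity, bool rightSide, out RaycastHit wallHit)
    {
        Vector3 side = rightSide ? transform.right : -transform.right;

        // 1) A sideways ray looks for a suitably vertical surface within range.
        if (!Physics.Raycast(transform.position, side, out wallHit, sideReach)) return false;
        if (Mathf.Abs(wallHit.normal.y) > 0.3f) return false;          // reject floors and ceilings

        // 2) Project the player's velocity forwards; either that ray, or a second sideways
        //    ray from its terminating point, must also find the wall.
        float lookAheadDistance = velocity.magnitude * lookAheadTime;
        Vector3 ahead = transform.position + velocity * lookAheadTime;
        RaycastHit aheadHit;
        if (!Physics.Raycast(transform.position, velocity.normalized, out aheadHit, lookAheadDistance) &&
            !Physics.Raycast(ahead, side, out aheadHit, sideReach))
            return false;

        // 3) Compare the two hits; the tolerance allows running along gently curved walls.
        return Vector3.Dot(wallHit.normal, aheadHit.normal) >= normalTolerance;
    }
}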

Similarly, two rays are cast forwards for vertical wallruns, to check the normal correspondence of points at different heights on a wall. Checks for ledgegrabs are also allowed during vertical wallruns.
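Continuing the same sketch, the vertical variant would be a method alongside CanWallrun, with the heights and thresholds again being placeholders:

// This method would sit alongside CanWallrun in the WallrunProbe sketch above.
public bool CanVerticalWallrun(out RaycastHit wallHit)
{
    // Two forward rays at different heights must hit roughly the same near-vertical plane.
    RaycastHit lowHit, highHit;
    bool low = Physics.Raycast(transform.position + Vector3.up * 0.5f, transform.forward, out lowHit, 1.0f);
    bool high = Physics.Raycast(transform.position + Vector3.up * 1.5f, transform.forward, out highHit, 1.0f);
    wallHit = lowHit;
    return low && high
        && Mathf.Abs(lowHit.normal.y) < 0.3f                       // the wall is near-vertical
        && Vector3.Dot(lowHit.normal, highHit.normal) >= 0.9f;     // both heights agree
}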

[Screenshots: the ledgegrab and wallrun tutorials]

Pedestrians
At about the half-way mark the city was feeling a little dead, so we threw some ideas around for how to liven the place up. One of these ideas was to add mobile paintable objects, such as pedestrians whose animations change to indicate their happiness when painted.

My implementation for these was fairly quick-and-dirty: pedestrian navigation node prefabs are placed in the editor and linked together by dragging and dropping references from the hierarchy to the inspector. Some of these nodes can then be specified as starting points. A pedestrian will be spawned at each starting point when the level is loaded, and then traverse connected nodes at random.
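In code that setup amounts to something like the following (the class and field names are illustrative rather than the actual project scripts):

using UnityEngine;

public class PedestrianNode : MonoBehaviour
{
    public PedestrianNode[] connections;   // linked by drag-and-drop in the inspector
    public bool isStartingPoint;
    public int randomSeed;                 // manually specified seed, see below
}

public class PedestrianSpawner : MonoBehaviour
{
    public GameObject pedestrianPrefab;

    void Start()
    {
        // Spawn one pedestrian at every node flagged as a starting point; each pedestrian
        // then wanders the connected nodes at random.
        foreach (PedestrianNode node in FindObjectsOfType<PedestrianNode>())
        {
            if (!node.isStartingPoint) continue;
            Instantiate(pedestrianPrefab, node.transform.position, Quaternion.identity);
        }
    }
}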

To ensure consistent behaviour every time a level is run, each pedestrian has its own random number generator which receives a manually specified seed from its starting point. Pedestrians also have a bias to selecting directions towards their front:

// Randomly select a value between 0.0 and 1.0, then raise it to a power
// (PATHING_TURN_STRAIGHTNESS_BIAS) to bias the result towards 0, i.e. towards
// carrying on forwards rather than turning
float rotation = (float)(rand.NextDouble());
rotation = Mathf.Pow(rotation, PATHING_TURN_STRAIGHTNESS_BIAS);
// Scale up to half a turn (pi radians) and apply the turn direction, dir
rotation = rotation * Mathf.PI * dir;

[Screenshots: a pedestrian before painting (sad) and after (happy)]

Patrol Bots
The patrol bot AI was originally implemented by another member of the team, but I was responsible for a late recode to refine their behaviour. Unity’s navigation meshes are unsuitable for flying agents, and it was too late in the project to viably implement my own, so I just created a tidier, enhanced implementation of the original idea.

In the patrol state, patrol bots follow a series of nodes, similar to pedestrians, but with no randomness whatsoever. If the player strays within a wide, short cone of vision, represented as a red spotlight, the robot will begin pursuing them.

When pursuit begins, a point in 3D space (the robot’s current position) is pushed onto the “past” stack. At regular intervals (every frame, actually) the robot capsule-casts its collision volume towards the top item on the past stack. If the point is not occluded, we update the lastPointSawPastFrom variable to the robot’s current position. If the point is occluded, we push lastPointSawPastFrom onto the past stack. This maintains a clear path for the robot to trace back to its designated patrol route once the pursuit is over.
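The per-frame part of that reads roughly like this (a sketch: the stack type, capsule dimensions, and method names are assumptions on my part):

using System.Collections.Generic;
using UnityEngine;

public class PursuitBreadcrumbs : MonoBehaviour
{
    public float capsuleRadius = 0.5f;
    public float capsuleHeight = 1.5f;

    private Stack<Vector3> past = new Stack<Vector3>();
    private Vector3 lastPointSawPastFrom;

    public void BeginPursuit()
    {
        past.Clear();
        past.Push(transform.position);                   // first breadcrumb: where pursuit started
        lastPointSawPastFrom = transform.position;
    }

    // Called every frame while pursuing.
    public void UpdateBreadcrumbs()
    {
        Vector3 target = past.Peek();
        Vector3 toTarget = target - transform.position;
        if (toTarget.sqrMagnitude < 0.01f) return;       // already on top of the breadcrumb

        Vector3 top = transform.position + Vector3.up * (capsuleHeight * 0.5f);
        Vector3 bottom = transform.position - Vector3.up * (capsuleHeight * 0.5f);

        // Capsule-cast our collision volume towards the newest breadcrumb.
        bool occluded = Physics.CapsuleCast(top, bottom, capsuleRadius,
                                            toTarget.normalized, toTarget.magnitude);
        if (!occluded)
            lastPointSawPastFrom = transform.position;   // we can still see the way back
        else
            past.Push(lastPointSawPastFrom);             // drop a new breadcrumb before we lose it
    }
}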

During pursuit, the robot raycasts towards the player to determine whether he is occluded by terrain. If visible, the destination “objective” is updated to the player’s position. Otherwise, if the player was visible last frame, we assume the robot could have guessed where the player was going, and set a second destination “future” point, offset from the objective by [arbitrary] seconds’ worth of the player’s velocity.
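Something along these lines captures that target selection (again hedged: the field names, prediction time, and the use of the player's CharacterController velocity are illustrative):

using UnityEngine;

public class PursuitTargeting : MonoBehaviour
{
    public CharacterController player;       // the player's controller, for position and velocity
    public float predictionTime = 1.0f;      // stands in for the [arbitrary] seconds above
    public LayerMask terrainMask;

    private Vector3 objective;               // "objective": last seen player position
    private Vector3 futurePoint;             // "future": guessed position after losing sight
    private bool sawPlayerLastFrame;

    void Update()
    {
        Vector3 toPlayer = player.transform.position - transform.position;
        bool visible = !Physics.Raycast(transform.position, toPlayer.normalized,
                                        toPlayer.magnitude, terrainMask);
        if (visible)
        {
            objective = player.transform.position;
        }
        else if (sawPlayerLastFrame)
        {
            // Guess where the player was heading: offset the last objective by a short
            // burst of their velocity.
            futurePoint = objective + player.velocity * predictionTime;
        }
        sawPlayerLastFrame = visible;
    }
}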

If, during pursuit, the player stays within a certain range of the robot, the robot will stun him temporarily using lasers, and then disengage. The robot will also disengage if it loses track of the player, either because it was blinded by a paintball, or could not see him at its last future point.

[Screenshots: patrol bot debug gizmos; a bot stunning the player with its lasers]

Tutorial
A number of additional scripts had to be created to facilitate a good tutorial.

The most significant of these were the GUI scripts to present tutorial dialogue boxes to the player, or separately to two players in split-screen mode. This was achieved by rendering textures and text from Unity’s OnGUI callback using preset (inspector-exposed) offset, position, and scale variables, and simply accumulating offsets as each item was drawn. The drawing code was put into a loop which runs twice when necessary, using different offsets for each player.
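Stripped down, the idea is roughly this (positions and sizes are placeholders, and page handling is omitted):

using UnityEngine;

public class TutorialDialogueGUI : MonoBehaviour
{
    public Texture2D boxBackground;
    public Vector2 boxSize = new Vector2(400f, 120f);
    public Vector2 baseOffset = new Vector2(40f, 40f);
    public string[] currentText = new string[2];   // one entry per player
    public bool splitScreen;

    void OnGUI()
    {
        int players = splitScreen ? 2 : 1;
        for (int i = 0; i < players; i++)
        {
            if (string.IsNullOrEmpty(currentText[i])) continue;

            // Each player's dialogue starts from a different base offset
            // (e.g. the second player's box is shifted to the lower half of the screen).
            Vector2 offset = baseOffset + new Vector2(0f, i * Screen.height * 0.5f);

            GUI.DrawTexture(new Rect(offset.x, offset.y, boxSize.x, boxSize.y), boxBackground);
            offset.y += 10f;                       // accumulate offsets as each item is drawn
            GUI.Label(new Rect(offset.x + 10f, offset.y, boxSize.x - 20f, boxSize.y - 20f),
                      currentText[i]);
        }
    }
}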

These dialogues are triggered by the player walking into trigger colliders represented visually as floating, rotating question marks. The triggers contain inspector-exposed lists for their pages and each page’s text, which are passed to the tutorial GUI scripts from the OnTriggerEnter callback. The players’ controls also had to be disabled while dialogues were visible.
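A trigger along these lines, building on the GUI sketch above, would do the hand-off (hypothetical names; the page-advancing and control-locking logic is elided):

using UnityEngine;

public class DialogueTrigger : MonoBehaviour
{
    public string[] pages;                 // inspector-exposed text for each page
    public TutorialDialogueGUI gui;
    public int playerIndex;                // which player's dialogue slot to fill

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;
        if (pages.Length == 0) return;

        // Hand the first page to the GUI; a fuller version would advance through the
        // remaining pages on input and disable the player's controls until dismissed.
        gui.currentText[playerIndex] = pages[0];
    }
}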

Finally (though actually implemented first), we needed respawn points, checkpoint triggers linking to them, and resetter triggers to boot you back to the last checkpoint. These were also used in the main level, where checkpoint volumes were labelled “revisitable”, and placed along the coastline, with a resetter volume just above the water.
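A minimal version of those three pieces might look like this (names are illustrative; a real version would also need to handle teleporting a CharacterController cleanly):

using UnityEngine;

public class Respawner : MonoBehaviour
{
    public Transform lastRespawn;          // updated by Checkpoint triggers
}

public class Checkpoint : MonoBehaviour
{
    public Transform respawnPoint;
    public bool revisitable;               // the main level's coastline checkpoints can re-trigger
    private bool used;

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;
        if (used && !revisitable) return;
        used = true;
        other.GetComponent<Respawner>().lastRespawn = respawnPoint;
    }
}

public class Resetter : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;
        // Boot the player back to their last checkpoint (e.g. when they fall into the sea).
        Respawner respawner = other.GetComponent<Respawner>();
        other.transform.position = respawner.lastRespawn.position;
    }
}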

[Screenshots: the cleaning bot in the tutorial; a dialogue trigger; debug gizmos]
