
The tech behind the game Moncage

Updated: Jan 30, 2023


Moncage is an indie puzzle game that plays with perspective.


I thought the gameplay looked super cool and immediately decided to recreate the core mechanics in Unity. Here is how it went.


Render Texture


Rotating the cube, it looks as if different scenes are hidden inside it, one visible from each side of the cube.


Note how each scene can only be seen from one side. That immediately suggests a quad with a Render Texture on it. We'll also need a camera that orbits the cube when the arrow keys are pressed, but that's the easier part.
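Here is a minimal sketch of that wiring, just to show the pieces involved (the class and field names are mine, and the same setup can also be done entirely in the editor by assigning a Render Texture asset to the inner camera and the quad's material):

using UnityEngine;

// Hypothetical setup script: renders the inner camera into a texture and
// shows that texture on one face of the cube.
public class WindowSetup : MonoBehaviour
{
    public Camera innerCamera;   // camera filming the scene hidden "inside" the cube
    public Renderer windowQuad;  // quad sitting on one face of the cube

    void Start()
    {
        var rt = new RenderTexture(1024, 1024, 24);
        innerCamera.targetTexture = rt;          // inner camera draws into the texture
        windowQuad.material.mainTexture = rt;    // the quad displays it
    }
}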




Now comes the challenge: how do we create the illusion that we are looking at something inside the box, with our render texture acting as a "window" into that inner world?


Camera Movement

It's the camera movement that does the magic. If the camera outside the box rotates, what's inside should also rotate so that the two movements stay aligned, right?


Let's add the same orbiting camera to our inner scene, attach the same controls, and see what we get.

It works, OK... but we get very severe distortion when looking at the face from a grazing angle near the edges.



So apparently, we are still looking at a very, very flat texture rather than a 3D world inside. What is happening here?


Let's take a look at the camera movement:


This is the top-down view of our scene. The green line indicates the face with the render texture we are looking at. Think of it as a window into an inner world. When we rotate to the other side, the visible part of the inner world should be around the blue arrow we are drawing, while the line connecting the camera and the cube's center actually falls outside the view.


So it makes no sense to have an orbiting camera for the inner world too. It should be a camera that stays in place and only rotates.
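One way to read that, as a rough sketch (the names here are mine, not from the original project): keep the inner camera parked in its scene and simply copy the outer camera's rotation onto it every frame.

using UnityEngine;

// Hypothetical sync script: the inner camera never moves, it only mirrors
// the rotation of the camera orbiting the box.
public class InnerCameraSync : MonoBehaviour
{
    public Transform outerCamera;  // the camera orbiting the cube

    void LateUpdate()
    {
        // Position stays fixed; only the rotation follows the outer camera.
        transform.rotation = outerCamera.rotation;
    }
}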


Works like a charm! We took a shortcut here because we only want things to look convincing enough to sell the illusion. What we are actually doing is this:

This is still not accurate, but we will worry about that later.



Lastly, for the camera movement, note that the Y rotation is always applied in world space while the X rotation is applied in local space. We'll also want to clamp the X rotation.


(I also switched to smoother mouse control later.)
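As a sketch of what that rotation handling can look like (again, class and field names are my own, and the input axes are just the default arrow-key axes):

using UnityEngine;

// Hypothetical controller: yaw around the world up axis, pitch around the
// local right axis, with the pitch clamped so the camera can't flip over.
public class CameraRotation : MonoBehaviour
{
    public float speed = 90f;     // degrees per second
    public float minPitch = -40f;
    public float maxPitch = 60f;

    float pitch;

    void Update()
    {
        float yawDelta   = Input.GetAxisRaw("Horizontal") * speed * Time.deltaTime;
        float pitchDelta = -Input.GetAxisRaw("Vertical")  * speed * Time.deltaTime;

        // Y rotation in world space.
        transform.Rotate(Vector3.up, yawDelta, Space.World);

        // X rotation in local space, clamped.
        float newPitch = Mathf.Clamp(pitch + pitchDelta, minPitch, maxPitch);
        transform.Rotate(Vector3.right, newPitch - pitch, Space.Self);
        pitch = newPitch;
    }
}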


Ray Casting

Let's talk about mouse control - what if we want to click on something in the "inner world"? How do we do a "game within a game"?


I want to dive in a little bit on how Unity does raycasting. In Unity, we often use

camera.ScreenPointToRay(Input.mousePosition)

to turn the mouse position into a ray for selecting a game object. People may picture it as a ray cast from the camera's position, but that's actually not the case. Unity's documentation says this about the ScreenPointToRay method:

The Ray originates from the near clipping plane rather than the Camera’s transform.position point.

And if you use a debug line to visualize the ray, you'll see the starting point is not fixed - it moves along the near clipping plane as the mouse moves. If you are familiar with perspective projection, this actually makes a lot of sense:










The ray we are casting actually connects a point on the near clipping plane to the corresponding point on the far clipping plane - both of which project to the same point on the screen.
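If you want to see this for yourself, a one-liner with Debug.DrawRay is enough (this snippet is mine, not from the original post):

using UnityEngine;

// Draws the mouse ray every frame; watch its origin slide along the near
// clipping plane instead of staying at the camera's position.
public class RayDebug : MonoBehaviour
{
    public Camera cam;

    void Update()
    {
        Ray ray = cam.ScreenPointToRay(Input.mousePosition);
        Debug.DrawRay(ray.origin, ray.direction * 10f, Color.green);
    }
}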


Right here we need to do two raycasts: one in the outer world using ScreenPointToRay, and one in the inner world using ViewportPointToRay.
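A rough sketch of how the two raycasts can be chained (my own names; it assumes the quad has a MeshCollider so the hit carries a texture coordinate, and that the quad's UVs map 1:1 onto the render texture):

using UnityEngine;

// Hypothetical picking script: the outer ray hits the window quad, the hit's
// UV becomes a viewport point for the inner camera, and a second ray is cast
// into the inner world from there.
public class WindowClick : MonoBehaviour
{
    public Camera outerCamera;
    public Camera innerCamera;
    public Collider windowQuad;   // MeshCollider on the quad face

    void Update()
    {
        if (!Input.GetMouseButtonDown(0)) return;

        Ray outerRay = outerCamera.ScreenPointToRay(Input.mousePosition);
        if (windowQuad.Raycast(outerRay, out RaycastHit quadHit, 100f))
        {
            Ray innerRay = innerCamera.ViewportPointToRay(quadHit.textureCoord);
            if (Physics.Raycast(innerRay, out RaycastHit innerHit, 100f))
                Debug.Log("Clicked inner object: " + innerHit.collider.name);
        }
    }
}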


The end result looks like this right now:



Shader

This looks cool and all, but ... it is not visually correct. It looks especially bad when you rotate the camera and look at the walls ... they look too far away. And if we are trying to achieve this, where the inner world actually looks as if it is inside the box:

It will spoil the magic.


First of all, to get the correct visual the player should see, we need the inner and outer cameras to sit at the same position relative to their own scenes.
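A sketch of what that can look like (names are mine, and it assumes the inner scene uses the same scale as the outer one): measure the outer camera's offset from the box and reapply it from the inner scene's reference point.

using UnityEngine;

// Hypothetical mirroring script: the inner camera keeps the same offset and
// rotation relative to the inner scene as the outer camera has relative to the box.
public class MirrorCamera : MonoBehaviour
{
    public Transform outerCamera;
    public Transform box;          // the cube in the outer scene
    public Transform innerAnchor;  // reference point of the inner scene

    void LateUpdate()
    {
        transform.position = innerAnchor.position + (outerCamera.position - box.position);
        transform.rotation = outerCamera.rotation;
    }
}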

It is distorted, as it should be. But we know that the part of the inner world we want to see shares the same screen position as the window we are looking at in the outer world. So we can clip that part of our inner world and map that onto our render texture.

v2f vert (appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    // Keep the screen position of this vertex instead of its UV.
    o.screenPos = ComputeScreenPos(o.vertex);
    return o;
}

Here we'll keep the screen position rather than UV.

fixed4 frag(v2f i) : SV_Target
{
    // Perspective division to get normalized screen-space UVs.
    float2 screenSpaceUV = i.screenPos.xy / i.screenPos.w;
    fixed4 col = tex2D(_MainTex, screenSpaceUV);
    return col;
}

We still need to do the perspective division ourselves, then map the result onto the face.

Nice! But that messes up our raycasting, because we basically clipped away part of the inner world's screen. For the inner raycast, we will use the screen position rather than the quad's UV coordinate this time.
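One way to adjust the picking for this (again my own sketch, assuming the inner camera's view now maps onto the quad by screen position, as in the shader above): once the outer ray confirms we hit the window, feed the mouse's viewport position straight into the inner camera.

using UnityEngine;

// Hypothetical screen-space picking: because the shader samples the render
// texture by screen position, the inner ray is cast at the same viewport
// coordinate as the mouse instead of at the quad's UV.
public class WindowClickScreenSpace : MonoBehaviour
{
    public Camera outerCamera;
    public Camera innerCamera;
    public Collider windowQuad;

    void Update()
    {
        if (!Input.GetMouseButtonDown(0)) return;

        Ray outerRay = outerCamera.ScreenPointToRay(Input.mousePosition);
        if (windowQuad.Raycast(outerRay, out RaycastHit _, 100f))
        {
            // Same viewport coordinate as the mouse, but through the inner camera.
            Vector3 viewportPoint = outerCamera.ScreenToViewportPoint(Input.mousePosition);
            Ray innerRay = innerCamera.ViewportPointToRay(viewportPoint);

            if (Physics.Raycast(innerRay, out RaycastHit innerHit, 100f))
                Debug.Log("Clicked inner object: " + innerHit.collider.name);
        }
    }
}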

There you have it!














