
# NervCode: Initial Virtual Keyboard display

So today we continue with our NervCode project implementation: so far we have only implemented some minimal support to display our “function objects”, with default names such as “function1”, “function2”. But one thing we will need very quickly is support for more user inputs, such as keyboard input.

Yet, one of the key ideas in this project is to get rid of the standard “keyboard and mouse” input mechanism and try to provide an alternative solution that would be more “mobile friendly”. And thus, what I have in mind concerning this point is to provide some kind of “virtual keyboard” around the user location when needed, so that we could keep typing characters even if there is no physical keyboard available. Let's see how we could implement this…

• The first [probably very naive] idea I have is to try to render each “key” as a separate object: each key could be represented as a single character on the virtual keyboard with a background shape behind it.
• So here I will try to build that background element procedurally.
I found this unity package which seems to be very close to what I want to do [ideally], but of course, I have no intention to pay for/use any external package here: I want to learn how to build everything by myself!
• I created a ShapeManager class as follows:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class ShapeTraits
{
    public float width = 1.0f;
    public float height = 1.0f;
    public Material mat = null;

    // Storage for the external border radius in the order
    // bottom_left, top_left, top_right, bottom_right
    public float[] borderRadius = new float[] { 0.1f, 0.1f, 0.1f, 0.1f };

    // Number of intermediate points for each corner:
    public int numCornerPoints = 3;
};

public class ShapeManager
{
    public GameObject createShapeObject(Transform parent, ShapeTraits traits)
    {
        // We create a new game object:
        GameObject obj = new GameObject("Shape");

        // We need a MeshFilter and a MeshRenderer on that object:
        MeshFilter filter = obj.AddComponent<MeshFilter>();
        MeshRenderer rdr = obj.AddComponent<MeshRenderer>();
        rdr.material = traits.mat;

        obj.transform.position = new Vector3(0.0f, 0.0f, 0.0f);
        obj.transform.parent = parent;

        // Now we should create the actual shape mesh:
        filter.mesh = createMesh(traits);
        return obj;
    }

    protected void computeNumVertices(ShapeTraits traits, ref int numVertices, ref int numIndices)
    {
        // We always have 4 vertices / 6 indices for the center part:
        numVertices = 4;
        numIndices = 6;

        int ncp = traits.numCornerPoints;

        // For now, let's just make this super simple and always consider our radius is not 0.0:

        // For each corner we have ncp + 2 additional vertices.
        numVertices += 4*(ncp+2);

        // For each corner we have ncp+1+2 triangles:
        numIndices += 4*(ncp+3)*3;
    }

    protected Mesh createMesh(ShapeTraits traits)
    {
        // Since we can have rounded borders, we may need to draw multiple "shape elements",
        // so we should count how many vertices and triangles we will generate:
        int numVertices = 0;
        int numIndices = 0;

        computeNumVertices(traits, ref numVertices, ref numIndices);
        // Debug.Log("Should build shape with "+numVertices+" vertices and "+numIndices+" indices.");

        Vector3[] vertices = new Vector3[numVertices];
        Vector3[] normals = new Vector3[numVertices];

        Vector2[] uvs = null;

        int[] indices = new int[numIndices];

        // Half width and half height:
        float hw = traits.width/2.0f;
        float hh = traits.height/2.0f;

        // Retrieve the 4 corner radii:
        float bl = traits.borderRadius[0];
        float tl = traits.borderRadius[1];
        float tr = traits.borderRadius[2];
        float br = traits.borderRadius[3];

        // Update all the normals:
        for(int i=0; i<numVertices; ++i) {
            normals[i].Set(0.0f, 0.0f, -1.0f);
        }

        // Add the center points / indices:
        vertices[0].Set(-hw+bl, -hh+bl, 0.0f); // bottom left point
        vertices[1].Set(-hw+tl, hh-tl, 0.0f); // top left point
        vertices[2].Set(hw-tr, hh-tr, 0.0f); // top right point
        vertices[3].Set(hw-br, -hh+br, 0.0f); // bottom right point

        indices[0] = 0;
        indices[1] = 1;
        indices[2] = 2;
        indices[3] = 0;
        indices[4] = 2;
        indices[5] = 3;

        int vpos = 4;
        int ipos = 6;

        int ncp = traits.numCornerPoints+2;

        // Add the corner points starting with the bottom left corner:
        for(int c=0;c<4;++c) {
            Vector3 orig = vertices[c];
            Vector3 dir = Quaternion.Euler(0, 0, c * -90.0f) * new Vector3(0.0f, -traits.borderRadius[c], 0.0f);

            for(int i=0;i<ncp;++i)
            {
                vertices[vpos+i] = orig + Quaternion.Euler(0, 0, i * -90.0f/(ncp-1)) * dir;

                if(i>0) {
                    // define the triangles:
                    indices[ipos++] = c;
                    indices[ipos++] = vpos+i-1;
                    indices[ipos++] = vpos+i;
                }
            }

            vpos += ncp;

            // Define the rectangular border:
            indices[ipos++] = c;
            indices[ipos++] = vpos-1;
            indices[ipos++] = (c+1)%4;
            indices[ipos++] = vpos-1;
            indices[ipos++] = c==3 ? 4 : vpos;
            indices[ipos++] = (c+1)%4;
        }

        return setupMesh(vertices, normals, indices, uvs);
    }

    protected Mesh setupMesh(Vector3[] vertices, Vector3[] normals, int[] indices, Vector2[] uvs)
    {
        Mesh mesh = new Mesh();

        mesh.vertices = vertices;
        mesh.triangles = indices;
        if(uvs != null) {
            mesh.uv = uvs;
        }
        mesh.normals = normals;

        return mesh;
    }
}
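As a quick sanity check on the vertex/index counting in computeNumVertices, here is a small script (in Python, purely to verify the arithmetic, not part of the Unity project) comparing the closed-form counts against a simulation of the generation loop in createMesh:

```python
# Closed-form counts from computeNumVertices (filled shape):
def counts(num_corner_points):
    ncp = num_corner_points
    num_vertices = 4 + 4 * (ncp + 2)          # 4 center points + corner points
    num_indices = 6 + 4 * (ncp + 3) * 3       # 2 center tris + corner tris
    return num_vertices, num_indices

# Simulation of the generation loop in createMesh:
def simulate(num_corner_points):
    ncp = num_corner_points + 2               # points generated per corner
    vpos, ipos = 4, 6
    for c in range(4):
        for i in range(ncp):
            if i > 0:
                ipos += 3                     # one "fan" triangle
        vpos += ncp
        ipos += 6                             # two border triangles
    return vpos, ipos

for n in range(1, 8):
    assert counts(n) == simulate(n)

print(counts(3))  # → (24, 78) with the default numCornerPoints = 3
```

So with the default settings we end up with 24 vertices and 78 indices (26 triangles) for one rounded rectangle.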

• And then I use that class in another VirtualKeyboard MonoBehaviour to create a simple test shape in its Start method:
// Start is called before the first frame update
void Start()
{
    // On start we should create a Shape Manager:
    shapeManager = new ShapeManager();

    // Then we create a child object attached to this transform:
    ShapeTraits traits = new ShapeTraits();
    traits.width = 5.0f;
    traits.height = 3.0f;

    // We need an actual material before we can set its color
    // (using the Standard shader here as an example):
    traits.mat = new Material(Shader.Find("Standard"));
    traits.mat.SetColor("_Color", new Color(1.0f, 0.0f, 0.0f, 1.0f));

    GameObject obj = shapeManager.createShapeObject(this.transform, traits);
}


And here is the result I can achieve with that:

⇒ Not too bad for a first test, isn't it?

Next step was to add support to draw just an “outline” with a given “border width” instead of a filled shape.

So I added the borderWidth member in my ShapeTraits class:

public class ShapeTraits
{
    public float width = 1.0f;
    public float height = 1.0f;
    public Material mat = null;

    // Storage for the external border radius in the order
    // bottom_left, top_left, top_right, bottom_right
    public float[] borderRadius = new float[] { 0.1f, 0.1f, 0.1f, 0.1f };

    // Border width of the outline, or -1.0 to draw a filled shape:
    public float borderWidth = -1.0f;

    // Number of intermediate points for each corner:
    public int numCornerPoints = 3;
};

And then implemented a dedicated function createMeshOutline that would be called instead of the default implementation if the borderWidth is set to a positive value:

protected Mesh createMeshOutline(ShapeTraits traits)
{
    int ncp = traits.numCornerPoints;
    int numVertices = 4*(ncp+2) * 2;

    // Each inner/outer vertex pair produces 2 triangles,
    // including the closing pair that wraps back to the start:
    int numIndices = numVertices*3;

    Vector3[] vertices = new Vector3[numVertices];
    Vector3[] normals = new Vector3[numVertices];

    Vector2[] uvs = null;

    int[] indices = new int[numIndices];

    // Half width and half height:
    float hw = traits.width/2.0f;
    float hh = traits.height/2.0f;

    float borderW = traits.borderWidth;

    // Retrieve the 4 corner radii:
    float bl = traits.borderRadius[0];
    float tl = traits.borderRadius[1];
    float tr = traits.borderRadius[2];
    float br = traits.borderRadius[3];

    // Update all the normals:
    for(int i=0; i<numVertices; ++i) {
        normals[i].Set(0.0f, 0.0f, -1.0f);
    }

    int vpos = 0;
    int ipos = 0;

    ncp = traits.numCornerPoints+2;

    Vector3[] bases = new Vector3[4];
    bases[0].Set(-hw+bl, -hh+bl, 0.0f); // bottom left point
    bases[1].Set(-hw+tl, hh-tl, 0.0f); // top left point
    bases[2].Set(hw-tr, hh-tr, 0.0f); // top right point
    bases[3].Set(hw-br, -hh+br, 0.0f); // bottom right point

    // Add the corner points starting with the bottom left corner:
    for(int c=0;c<4;++c) {
        Vector3 orig = bases[c];

        Vector3 dir = Quaternion.Euler(0, 0, c * -90.0f) * new Vector3(0.0f, -traits.borderRadius[c], 0.0f);
        Vector3 dir2 = Quaternion.Euler(0, 0, c * -90.0f) * new Vector3(0.0f, -traits.borderRadius[c]+borderW, 0.0f);

        for(int i=0;i<ncp;++i)
        {
            vertices[vpos++] = orig + Quaternion.Euler(0, 0, i * -90.0f/(ncp-1)) * dir2;
            vertices[vpos++] = orig + Quaternion.Euler(0, 0, i * -90.0f/(ncp-1)) * dir;

            if(i>0) {
                // define the triangles:
                indices[ipos++] = vpos-4;
                indices[ipos++] = vpos-3;
                indices[ipos++] = vpos-2;
                indices[ipos++] = vpos-2;
                indices[ipos++] = vpos-3;
                indices[ipos++] = vpos-1;
            }
        }

        // We close the shape:
        indices[ipos++] = vpos-2;
        indices[ipos++] = vpos-1;
        indices[ipos++] = c==3 ? 0 : vpos;
        indices[ipos++] = c==3 ? 0 : vpos;
        indices[ipos++] = vpos-1;
        indices[ipos++] = c==3 ? 1 : vpos+1;
    }

    return setupMesh(vertices, normals, indices, uvs);
}
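Same kind of arithmetic check for the outline variant (again in Python, just for verification): we generate one inner/outer vertex pair per corner point, and every pair, including the closing one that wraps back to the start, yields 2 triangles:

```python
# Closed-form counts used in createMeshOutline:
def outline_counts(num_corner_points):
    num_vertices = 4 * (num_corner_points + 2) * 2
    num_indices = num_vertices * 3
    return num_vertices, num_indices

# Simulation of the outline generation loop:
def simulate_outline(num_corner_points):
    pts = num_corner_points + 2        # corner points per corner
    vpos, ipos = 0, 0
    for c in range(4):
        for i in range(pts):
            vpos += 2                  # one inner + one outer vertex
            if i > 0:
                ipos += 6              # two triangles per new pair
        ipos += 6                      # two closing triangles per corner
    return vpos, ipos

for n in range(1, 8):
    assert outline_counts(n) == simulate_outline(n)

print(outline_counts(3))  # → (40, 120)
```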

Now here is the kind of result I can observe if I set the borderWidth to 0.5f for instance:

When the border width becomes larger than the border radius we currently get an incorrect display, with a draw artefact “leaking” on the backface of the outline mesh as shown below:

So of course, this should be fixed. But I think the solution is easy: we should just ensure that the “inner radius” we use to define our outline is never negative (but in practice, since we start with a vector pointing downward on the Y axis, we rather check that the value stays negative):

for(int c=0;c<4;++c) {
    Vector3 orig = bases[c];

    Vector3 dir = Quaternion.Euler(0, 0, c * -90.0f) * new Vector3(0.0f, -traits.borderRadius[c], 0.0f);

    Vector3 orig2 = orig;
    Vector3 dir2 = Quaternion.Euler(0, 0, c * -90.0f) * new Vector3(0.0f, -traits.borderRadius[c]+borderW, 0.0f);

    // Special handling in case the border width is bigger than the radius:
    if(borderW > traits.borderRadius[c]) {
        Vector3 diag = Quaternion.Euler(0, 0, c * -90.0f - 45.0f) * new Vector3(0.0f, -1.0f, 0.0f);
        float diff = borderW - traits.borderRadius[c];

        // Collapse the inner corner onto a single point on the inner rectangle:
        orig2 = orig - diag * Mathf.Sqrt(2*diff*diff);
        dir2 = new Vector3(0.0f, 0.0f, 0.0f);
    }

    // ... more stuff here
}
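To double check that clamping formula, here is the corner geometry worked out numerically (a Python sketch for verification only, with arbitrary test values): when the border width exceeds the radius, the collapsed inner corner should land exactly on the corner of the inner rectangle:

```python
import math

def rot_z(deg, v):
    # 2D rotation about Z, matching Unity's Quaternion.Euler(0, 0, deg)
    # applied to a vector in the XY plane:
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

hw, hh = 2.5, 1.5     # half width / half height (arbitrary test values)
radius = 0.3          # corner radius
border_w = 0.5        # border width, larger than the radius

# Bottom-left corner (c == 0), as in the snippet above:
orig = (-hw + radius, -hh + radius)
diag = rot_z(-45.0, (0.0, -1.0))
diff = border_w - radius
step = math.sqrt(2 * diff * diff)
orig2 = (orig[0] - diag[0] * step, orig[1] - diag[1] * step)

# The collapsed inner corner lands exactly on the corner of the
# inner rectangle, at (-hw + border_w, -hh + border_w):
assert abs(orig2[0] - (-hw + border_w)) < 1e-9
assert abs(orig2[1] - (-hh + border_w)) < 1e-9
```

Moving the origin by |diff|·√2 along the 45° diagonal is exactly what compensates for the missing radius, which is why the artefact disappears.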


And with this change the rendering is now correct (and with no artefact behind the object):

Good! With this issue fixed it's now time to move to the complete key rendering with text display.

Thinking about it, I'm pretty sure we could achieve similar results directly inside a shader (i.e. when drawing a rectangle): it could certainly be interesting to investigate that path eventually.

To display a complete key, I extended my VirtualKeyboard class with a createKey function that will create the background as well as the text mesh, and attach the objects to our VirtualKeyboard parent:

GameObject createKey(string sym, Vector3 pos, Vector3 forward)
{
    // We create a background and a text mesh for that key:
    GameObject obj = new GameObject("key_"+sym);

    GameObject bg = shapeManager.createShapeObject(this.transform, keyBgTraits);
    bg.transform.parent = obj.transform;
    bg.transform.localPosition = new Vector3(0.0f, 0.0f, 0.0f);

    // Now we create the text display:
    GameObject txt = new GameObject("txt_"+sym);
    txt.transform.parent = obj.transform;
    txt.transform.localPosition = new Vector3(0.0f, 0.0f, 0.0f);

    TextMesh tmesh = txt.AddComponent<TextMesh>();
    tmesh.text = sym;
    tmesh.offsetZ = 0.01f;
    tmesh.anchor = TextAnchor.MiddleCenter;
    tmesh.characterSize = 0.00045f*keyHAngle;
    tmesh.fontSize = 100;

    obj.transform.parent = this.transform;
    obj.transform.localPosition = pos;

    Vector3 left = Vector3.Cross(forward, Vector3.up);
    Vector3 up = Vector3.Cross(left, forward).normalized;
    // obj.transform.localRotation = Quaternion.FromToRotation(new Vector3(0.0f,0.0f,-1.0f), forward);
    obj.transform.localRotation = Quaternion.LookRotation(-forward, up);

    return obj;
}

I had to do some tweaking on the text mesh characterSize to get a correct size on screen given the angle covered by a key, because I didn't find any bulletproof/simple enough mechanism to retrieve the size of the text in the 3D scene.

And here is my first complete key rendered:

But… why should we stop at rendering a single key, when we can render a full keyboard?! I just had to extend the code around the call to createKey a bit to support rendering multiple “rows” of keys, with multiple keys on each row, with this kind of code:

// create all the keys:
int nrows = rowList.Count;
for(int r=0;r<nrows; ++r) {
    // The vertical offset (converted to radians) is given by:
    float vangle = r*(keyHAngle/keyAspect + spaceAngle) * Mathf.Deg2Rad;
    KeyNameList knl = rowList[r];

    // Compute the complete horizontal angle coverage for that line.
    // Note: we remove one key here to account for the center placement,
    // and then we divide by 2:
    int nkeys = knl.Count;
    float hangle = (nkeys-1) * (keyHAngle+spaceAngle) * 0.5f * Mathf.Deg2Rad;

    // Create each key at the correct position:
    for(int k=0;k<nkeys;++k) {
        float hang = -hangle + (keyHAngle+spaceAngle) * k * Mathf.Deg2Rad;

        float cphi = Mathf.Cos(vangle);
        float sphi = Mathf.Sin(vangle);
        float ctheta = Mathf.Cos(hang);
        float stheta = Mathf.Sin(hang);

        Vector3 dir = new Vector3(cphi*stheta, -sphi, cphi*ctheta);
        dir.Normalize();
        createKey(knl[k], dir*keyDistance, -dir);
    }
}
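A quick check on the spherical placement math (Python, for verification only): the direction built from the two angles is always unit length, so every key really sits at exactly keyDistance from the camera:

```python
import math

def key_dir(vangle, hang):
    # Spherical direction used to place a key, as in the loop above:
    cphi, sphi = math.cos(vangle), math.sin(vangle)
    ctheta, stheta = math.cos(hang), math.sin(hang)
    return (cphi * stheta, -sphi, cphi * ctheta)

# cos²(phi)·(sin²(theta)+cos²(theta)) + sin²(phi) == 1, so the vector is
# already normalized and the explicit Normalize() call is just a safety net:
for v in (-0.4, 0.0, 0.3):
    for h in (-0.6, 0.0, 0.5):
        d = key_dir(v, h)
        assert abs(math.sqrt(sum(x * x for x in d)) - 1.0) < 1e-9
```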

And here we go:

Nice! This is finally starting to feel a bit satisfying! As shown on the image above, the keys are placed on a sphere and oriented towards the center of that sphere (where we also have the player camera).

Now of course, we don't always want to have the keyboard filling the screen so we should be able to show/hide it on a key press. Let's handle that.

I just found this nice page on the Xbox 360 controller mapping in Unity.

Actually, for reference, here is the mapping you will find on the page mentioned just above:

I also found this other page with a script that can be used to discover the mapping for all kinds of gamepads: could be useful at some point.
Actually, I eventually realized that the mapping mentioned above is incorrect: for an Xbox One controller right stick we rather have horizontal axis ⇔ 4th joystick axis and vertical axis ⇔ 5th joystick axis.

Yet, as I mentioned in one of my previous posts on this project, I don't like the Unity Input management window [I mean… at all. Come on, guys?!…]. So this makes me wonder: is there maybe a way to define the mapping we want dynamically? Let's see…

⇒ And actually, I feel there is no real need to define a specific input action for each button/axis we want to map: if I understand correctly, we can instead retrieve raw values directly from the InputManager. I need to test that. So I added this kind of code:

// Check if the hat up button is pressed:
float ax7 = Input.GetAxis("joystick 1 Axis 7");
if(ax7 > 0.5f)
{
    Debug.Log("Hat up button pressed.");
}
else if(ax7 < -0.5f)
{
    Debug.Log("Hat down button pressed.");
}

… And well, hmmm, it just doesn't work. I tried a lot of different possible names for the axis, but I always just receive a long list of exceptions from Unity when trying to do a GetAxis() with those names… how could that be? Nope… just no way to do it: as crazy as this may sound, you cannot retrieve the value of a real axis on a given joystick inside Unity with the base “Input Manager” system only (I mean, “without any external module”: if I were to inject SDL2 in there, this would clearly not be a problem anymore of course!)

There was still this gamepad support I mentioned before, which is part of the “InputSystem” package. But I feel it might not be a terribly good idea to add another dependency at this level right now, and the package doesn't seem to be complete anyway.

So… not much choice left: I should create a “virtual axis” for each possible axis on my Xbox One controller so that I can retrieve its value… Insane, but it should work. Let's just go with it.

OK, with that list of virtual input axes, the following code snippet works as expected:

// Check if the hat up button is pressed:
float ax7 = Input.GetAxis("Joy1Axis7");
if(ax7 > 0.5f)
{
    Debug.Log("Hat up button pressed.");
}
else if(ax7 < -0.5f)
{
    Debug.Log("Hat down button pressed.");
}
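The ±0.5 tests above just implement a dead zone around the hat's rest value; distilled to its core, the logic looks like this (a Python sketch, function name hypothetical):

```python
def hat_state(axis_value, threshold=0.5):
    # Map a raw hat axis value to a discrete state, with a dead zone
    # around the rest position (0.0):
    if axis_value > threshold:
        return "up"
    if axis_value < -threshold:
        return "down"
    return "neutral"

print(hat_state(1.0), hat_state(-1.0), hat_state(0.1))  # → up down neutral
```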

Thinking about the default input management provided by Unity, I now really feel I should do something about it: I should build my own InputManager class to provide a better interface to handle the user inputs, because the default system really seems to be too limited.

I'm thus trying to build that class as an autocreated singleton:

public class InputManager : MonoBehaviour
{
    private static InputManager singleton = null;

    public static InputManager instance()
    {
        if(singleton == null) {
            // We should create the singleton object here, attaching
            // the InputManager component to a dedicated GameObject:
            GameObject obj = new GameObject("InputManager");
            singleton = obj.AddComponent<InputManager>();
        }

        return singleton;
    }
}
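Stripped of the Unity specifics, the lazy-creation pattern used here boils down to the following (a minimal Python sketch, not the actual class):

```python
class InputManager:
    _singleton = None

    @classmethod
    def instance(cls):
        # Lazily create the singleton on first access (the Unity version
        # additionally creates a host GameObject and attaches itself to it):
        if cls._singleton is None:
            cls._singleton = cls()
        return cls._singleton

a = InputManager.instance()
b = InputManager.instance()
assert a is b  # every call returns the same object
```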

I then extended that class to a point where I can now handle key press and axis move events independently from the legacy Unity input manager system. Except that this still requires all the joystick axes to be declared inside the Unity input manager as virtual axes as described in the section above. And I just found this page that could be an interesting solution to this problem. Let's see if this works.

Note: I tried to use the UnityEditor namespace elements directly inside my InputManager as follows:

#if UNITY_EDITOR
// (Building the SerializedObject from the InputManager.asset settings file:)
SerializedObject serializedObject = new SerializedObject(
    AssetDatabase.LoadAllAssetsAtPath("ProjectSettings/InputManager.asset")[0]);
SerializedProperty axesProperty = serializedObject.FindProperty("m_Axes");
axesProperty.ClearArray();
serializedObject.ApplyModifiedProperties();
#endif

… But even with the preprocessor check this will produce an exception, because it seems we are not allowed to call those functions inside a MonoBehaviour class constructor [That's fair enough… since we are not supposed to use the UnityEditor in the exported game.]

⇒ So I'm now trying to build a regular Unity Editor menu item for this:

public class InputManagement : MonoBehaviour
{
    [MenuItem("NervTech/Clear Input Entries")]
    static void InputManager_clear()
    {
        SerializedObject serializedObject = new SerializedObject(
            AssetDatabase.LoadAllAssetsAtPath("ProjectSettings/InputManager.asset")[0]);
        SerializedProperty axesProperty = serializedObject.FindProperty("m_Axes");
        axesProperty.ClearArray();
        serializedObject.ApplyModifiedProperties();
    }
}


⇒ And this works just fine! I got my menu item in my custom “NervTech” menu, and when I click on it, all the entries from the Unity InputManager window are removed, just as expected.

Now time to automatically add our required joystick axes…

All good! So I now have a script available to create all the required virtual axes. And this makes me wonder: maybe I could consider building a Unity package for this input management system and put it up for sale on the Unity Asset Store? ⇒ We'll get back to this point quickly!

But first, I really need to ensure I can use this new InputManager class as desired. So let's keep going… And it seems this is now working pretty well! Here is for instance the setup I use to control the camera just as before:

// We should also define our horizontal/vertical virtual axes:
InputManager.getVirtualAxis("MoveHorizontal");
InputManager.getVirtualAxis("MoveVertical");
InputManager.getVirtualAxis("LookHorizontal");
InputManager.getVirtualAxis("LookVertical");

InputManager.on("Space_shortpress").connect((InputEvent evt) => {
    if(!m_Jump) {
        m_Jump = true;
    }
    return true;
});

InputManager.on("C_shortpress", "Joystick1Button0_shortpress").connect((InputEvent evt) => {
    user.createFunctionObject();
    return true;
});

InputManager.on("P_shortpress", "Joystick1Button1_shortpress").connect((InputEvent evt) => {
    return true;
});

InputManager.on("Escape_shortpress").connect((InputEvent evt) => {
    Debug.Break();
    return true;
});
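For reference, here is a minimal sketch of the signal/slot mechanism those on(…).connect(…) calls assume (in Python, all names hypothetical): each signal keeps a list of handlers and calls them until one reports the event as handled:

```python
class Signal:
    """A minimal signal/slot implementation (names hypothetical)."""

    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)
        return self

    def emit(self, evt):
        # Call each connected slot until one reports the event as handled:
        for slot in self._slots:
            if slot(evt):
                return True
        return False

jump_requested = []
space = Signal()
space.connect(lambda evt: (jump_requested.append(evt), True)[1])

space.emit("Space_shortpress")
print(jump_requested)  # → ['Space_shortpress']
```

Returning true from a slot marks the event as consumed, which is why all the handlers in the snippet above end with `return true;`.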

In fact, this is not exactly “just as before”: in the process I actually fixed the default keyboard mapping in Unity, which assumes a QWERTY keyboard [and I have an AZERTY keyboard myself!]: now I can finally use my ZSQD keys as usual, yeepee!
I should probably write a complete article on this new InputManagement system to clarify how it works…

And with my new input management system showing/hiding the virtual keyboard was a piece of cake:

// Initially we do not want this object to be visible:
gameObject.SetActive(false);

InputManager.on("V_shortpress", "Joystick1Button2_shortpress").connect((InputEvent evt) => {
    gameObject.SetActive(!gameObject.activeSelf);
    return true;
});

Yet, I think I could improve the InputManager a bit further: it is not obvious which button “Joystick1Button2” corresponds to, so we should provide alternative names for those gamepad buttons/axes (such as button_A, button_B, button_Square, etc.) ⇒ I'm adding this to my todo list.

In this article we started with the virtual keyboard display implementation, but then I eventually started to focus more on the input management system lol. There is still a lot to do on the virtual keyboard: for now we can only display the keys but we cannot interact with them yet. But I think I should stop here for now, because I cannot really focus on this aspect of things for the moment: instead I really want to push further on the input management until I can produce a dedicated Unity package for it.

So let's call it a day, and no worries: we will get back to the keyboard handling shortly anyway!
