
Building the basic FunctionObject

So, in this article, we are going to investigate how to create and display “function objects” inside the NervCode environment. Obviously, we are not going for anything too fancy here, just the bare minimum to consider that we are moving forward ;-).

The first concept to set up is this: when we press a button on the gamepad (say, the “A” button on an Xbox controller for instance), we should place a new function object in the world.

⇒ So, let's first update our controller script to handle a button press:

    private void Update()
    {
        if(Input.GetKeyDown(KeyCode.Escape)) {
            Debug.Break();
        }

        RotateView();

        if (Input.GetButtonDown("Jump") && !m_Jump)
        {
            m_Jump = true;
        }

        if (Input.GetButtonDown("Create")) {
            // TODO: At this point, we should create a function object at the location we are looking at.
            
            // For the moment, we just display a debug output:
            Debug.Log("Should create a function object.");
        }
    }

Of course, we now need to add this “Create” button in the Edit->Project Settings->Input Manager panel:

input_manager_create_button.jpg

I really dislike this Unity window used to set up the user inputs… it looks so primitive. When you know that Unity can be used to create state-of-the-art games/rendering, that kind of display really doesn't seem to fit the bill.

Here for instance I'm entering the value “joystick button 2” manually, and I have no idea whether this will correspond to the button I want on the controller, so I have to test the game to find out?! That's really not convenient.

  • Given the note I made just above, I had a look on the Internet and found this page on gamepad support for Unity. I'm not quite sure how that would work on an Android device for instance (ie. without a gamepad connected), but it might still make sense to try to use this [To be investigated later].
  • So I just tried with my “button 2”, and it turns out it corresponds to the “X” button of the controller. So let's try “button 0” then… OK, this is it. So it seems we have the mapping:
  Button Id   Xbox controller button
  ---------   ----------------------
  button 0    A
  button 1    B
  button 2    X
  button 3    Y
When it's really time to start handling the joystick in the Android app version, I should have a deeper look at this YouTube video.
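
Side note: rather than guessing button indices, a tiny probe script can log which raw joystick button is pressed. Here is a minimal sketch of the idea (the JoystickButtonProbe name is mine):

using UnityEngine;

// Minimal probe: logs the index of any raw joystick button press,
// so the Input Manager mapping can be discovered empirically.
public class JoystickButtonProbe : MonoBehaviour
{
    void Update()
    {
        // Unity defines KeyCode.JoystickButton0 up to KeyCode.JoystickButton19:
        for (int i = 0; i < 20; ++i)
        {
            if (Input.GetKeyDown(KeyCode.JoystickButton0 + i))
            {
                Debug.Log("Pressed joystick button " + i);
            }
        }
    }
}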

Now we need to be able to display a simple prefab object on the Create button press, so let's create that:

  • For the moment, we just create a simple cube as the prefab (eventually I would like something more visually attractive of course, but this will come later).
  • I rescaled the box to (1,1,0.5)
  • I also created a very minimal “FuncMat” material to apply on that box by default.
  • Now let's also add a text slot on top of it as the function name.

And here is our first “function object” prefab 8-):

simple_function_prefab.jpg

I had a bit of a hard time trying to figure out how to position the sub-objects correctly relative to the parent container: it turns out I was confused by the text object anchor, which is set to “upper left” by default [and I needed “middle center” instead].

It's now time to actually create our first “function object” dynamically. So in the “Create” action we will first figure out where we are looking on the ground, and then instantiate our prefab at that location.

Yet, I don't want to put any of the “code/entities” handling in the script controlling the camera: so instead, let's create a CodeManager object in our scene and add a CodeManager script on it. Done.

The CodeManager should support creating a new function object at a given position in the world. This means the ray-casting logic should still be part of the camera script for now.
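
For reference, here is roughly what this scaffolding could look like at this stage (a sketch only: the functionObjectPrefab field name is taken from the code further below, and the prefab is assumed to be assigned in the Inspector):

using UnityEngine;

public class CodeManager : MonoBehaviour
{
    // Prefab to instantiate for each new function object (assigned in the Inspector):
    public GameObject functionObjectPrefab;

    // Called by the camera script once it has found a target position on the ground:
    public void createFunctionObject(Vector3 position)
    {
        // Initial version: identity orientation (this is refined below):
        Instantiate(functionObjectPrefab, position, Quaternion.identity, this.transform);
    }
}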

Here are the first results I got on this:

created_multiple_functions.jpg

⇒ As we can see on the image above, I had a problem with the placement altitude: the hit points I got were always reporting a Y value of 0.5, while the plane is really at Y=0.0. How could that be? Fixed: I replaced the default “Mesh collider” on the plane with a “Box collider” with a very small height (0.01), and this seems to do the trick.

Okay, so we can now place our function objects correctly on the ground. Yet, on instantiation I'm currently using an identity quaternion for the orientation: it could be interesting to try to align the newly created object to face the camera instead… So let's figure out how to do that.

… And it wasn't too hard actually: we just update createFunctionObject() in the code manager to also take the hit-point-to-camera direction as a “forward” vector:


    public void createFunctionObject(Vector3 position, Vector3 forward)
    {
        Debug.Log("Creating function object prefab at pos="+position);

        // We have to project the forward vector on the ground plane: 
        forward.y = 0;
        forward.Normalize();

        // Note: the default "forward" vector for our prefab is along -Z, so this is the axis from where we want to start the rotation:
        Quaternion att = Quaternion.FromToRotation(-Vector3.forward, forward);
        Instantiate(functionObjectPrefab, position, att, this.transform);
    }

And we pass the desired vector when we make a request to create such an object from the camera script:

    private void createFunctionObject()
    {
        // cf. https://docs.unity3d.com/Manual/CameraRays.html
        RaycastHit hit;
        
        // Ray ray = cam.ScreenPointToRay(Input.mousePosition);
        
        // Note: we want to cast a ray at the center of the screen,
        // so we use a position corresponding to (0.5,0.5,0.0)
        // But the screen space is defined in pixels (cf. file:///W:/Apps/Unity/2019.4.0f1/Editor/Data/Documentation/en/ScriptReference/Camera.ScreenPointToRay.html)
        // So it goes from (0,0) to (pixelWidth-1, pixelHeight-1).
        
        Vector3 screenPos = new Vector3((cam.pixelWidth-1)/2.0f, (cam.pixelHeight-1)/2.0f, 0.0f);
        Ray ray = cam.ScreenPointToRay(screenPos);

        if (Physics.Raycast(ray, out hit)) {
            Transform ground = hit.transform;
            
            // We check if this object is our ground:
            if(ground.tag == "Ground") {
                // Debug.Log("Should place a function object at location: "+hit.point);
                codeManager.createFunctionObject(hit.point, this.transform.position - hit.point);
            }
            else {
                Debug.Log("Ignoring intersection result on non-ground object.");
            }
        }
    }

With these changes, the function objects are now created pointing towards the camera (at the moment the create button is pressed, obviously):

placing_with_orientation.jpg

Nice 8-)!

Next thing I want to change is the default function name: currently, all new function objects are just called “Hello world”: not so good for a function name :-). So, I would like the code manager to update the function name to a unique default name on creation, like “function1”, “function2”, etc.

And at the same time, I think I should really have the “function name” as part of the function object: so I should add a Function script to implement that behavior.

Here is the Function MonoBehaviour script I created for this:

public class Function : MonoBehaviour
{
    protected string functionName;
    protected TextMesh textMesh;

    private void Start()
    {
        textMesh = GetComponentInChildren<TextMesh>();
    }

    public void setName(string name)
    {
        Debug.Log("Updating function name from "+functionName+" to "+name);
        functionName = name;
        textMesh.text = name;
    }

    public string getName()
    {
        return functionName;
    }
}

And then on function object creation I assign it a unique name with an incrementing ID:

    public void createFunctionObject(Vector3 position, Vector3 forward)
    {
        Debug.Log("Creating function object prefab at pos="+position);

        // We have to project the forward vector on the ground plane: 
        forward.y = 0;
        forward.Normalize();

        // Note: the default "forward" vector for our prefab is along -Z, so this is the axis from where we want to start the rotation:
        Quaternion att = Quaternion.FromToRotation(-Vector3.forward, forward);
        GameObject obj = Instantiate(functionObjectPrefab, position, att, this.transform);

        // We should also update the function name here:
        Function func = obj.GetComponent<Function>();
        func.setName("Function"+nextFuncId++);
    }

Yet, that doesn't seem to work as expected: it rather seems I get an exception when trying to set the function name as the text value inside the textMesh component. Why is that?

… Actually, I have a little theory about this: I just instantiated the FunctionObject, so maybe the Start() function has not been called yet on that object. But I immediately call setName(), which uses a reference to the textMesh that is itself only retrieved inside Start()… and thus we have a null reference at this point (?)

⇒ I should clarify when exactly the Start() method is called… OK, so according to this page, I should really be retrieving my reference in the Awake() method if I expect this to work here.
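
So the fix is simply to move the component lookup from Start() into Awake(), which runs right at instantiation time:

    private void Awake()
    {
        // Awake runs at instantiation time, before any external setName()
        // call can reach this component, so the reference is ready in time:
        textMesh = GetComponentInChildren<TextMesh>();
    }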

And indeed, it works now:

function_incremental_names.jpg

One last piece of behavior I would like to work on here is the capability to move an existing function object to a different location in the world. Basically, if the user is facing an object and presses, say, the “B” button on the gamepad, the camera/user should “pick up” the object and start carrying it around, until the user presses the “B” button again. Let's see how we could do something like that :-)

  • So I created a “Pick” action, and then I call two functions to either “pick up” or “release” a payload, as follows:

        if (Input.GetButtonDown("Pick")) {
            if(this.payload != null) {
                releaseObject();
            }
            else {
                pickUpObject();
            }
        }

In the pickUpObject() function, I check whether I'm looking at a mesh tagged “FunctionShape” (I added that tag on the box shape inside the FunctionObject prefab):

        RaycastHit hit;
        Ray ray = getScreenRay();

        if (Physics.Raycast(ray, out hit)) {
            Transform obj = hit.transform;

            // We check if this object is the shape of the Function:
            if(obj.tag == "FunctionShape") {
                Function func = obj.GetComponentInParent<Function>();

                Debug.Log("Should set function "+func.getName()+" as payload.");
            }
            else {
                Debug.Log("Not picking non function object.");
            }
        }
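
The getScreenRay() helper used above is simply the center-of-screen ray computation from createFunctionObject() factored out into its own method, i.e. something like:

    private Ray getScreenRay()
    {
        // Cast a ray from the center of the screen (same logic as in createFunctionObject):
        Vector3 screenPos = new Vector3((cam.pixelWidth-1)/2.0f, (cam.pixelHeight-1)/2.0f, 0.0f);
        return cam.ScreenPointToRay(screenPos);
    }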

So far so good: when I'm looking at an actual FunctionObject, I can retrieve its name, and when I'm looking somewhere else I get the second message. Yet I'm now thinking about something: when I'm “carrying” my payload, I also want to take collisions on that payload into account when moving the camera… This means I should somehow be updating the collision mesh for the camera handler when in this state?

  • For now, let's just implement a simple picking system, reparenting our payload to the camera controller.
  • OK, so this works nicely: I can pick an object, move it around, and then release it.
  • Yet I thought I could try adding a “RigidBody” component to my FunctionObject, and from that point it seems I cannot detect it anymore for picking… how could that be? OK, so I finally realized that this was because, in that case, I get an intersection with the FunctionObject itself directly (hit.transform reports the transform of the attached rigidbody rather than that of the collider that was hit), so I updated the code accordingly:

        if (Physics.Raycast(ray, out hit)) {
            Transform obj = hit.transform;
            
            // We check if this object is the shape of the Function:
            Function func = obj.GetComponent<Function>();
            if(func == null) {
                // Check maybe in the parent GO:
                func = obj.GetComponentInParent<Function>();
            }

            if(func != null) {
                Debug.Log("Should set function "+func.getName()+" as payload.");
                payload = func.gameObject;
                payload.transform.parent = this.transform;
                payload.transform.localPosition = new Vector3(0.0f,0.1f,1.0f);
                payload.transform.localRotation = Quaternion.identity;
            }
            else {
                Debug.Log("Not picking non function object: "+obj.name);
            }
        }

As shown in the code above, I don't need the “FunctionShape” tag anymore with that implementation.
  • Yet, the behavior I observed when using a RigidBody component is a bit “too chaotic”: I could consider disabling that component while carrying the payload, but even then it seems far too easy to move those objects when touching them (⇒ maybe I could simply increase their mass then?)
  • Cool, this works! I increased the mass value to 40 units for the FunctionObject and added support to enable/disable its rigidbody while it is carried, with this kind of function:

    public void enableRigidBody(bool enabled)
    {
        if(enabled) {
            // Hand the object back to the physics engine:
            rb.isKinematic = false;
            rb.detectCollisions = true;
        }
        else {
            // Freeze the object while it is being carried:
            rb.isKinematic = true;
            rb.detectCollisions = false;
        }
    }
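
For completeness, the matching releaseObject() would then detach the payload and hand it back to the physics engine (with a symmetric enableRigidBody(false) call on pick-up). A sketch, assuming enableRigidBody() lives on the Function component:

    private void releaseObject()
    {
        // Detach the payload so it stays where it currently is:
        payload.transform.parent = null;

        // Re-enable physics so the object can settle on the ground:
        Function func = payload.GetComponent<Function>();
        func.enableRigidBody(true);

        payload = null;
    }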

The next thing I would like to try now is to animate the pick-up/release actions to make them more natural: we should not be changing the local position/attitude of the payload immediately; instead, we should change these progressively.

  • I found this article on how to perform animations: not quite sure this will work for me, but let's have a look anyway!
  • As discussed in many places, creating an “AnimatorController” dynamically doesn't seem to be possible (?). But we might have a working solution around this, as described on this page.
  • Hmmm… actually, trying to use an animator + animator controller + animation clip doesn't seem to be what I need here: this is definitely too complex, and I'm not sure I would get the correct results. I think I should rather just try to animate the position of my object manually!
  • Thus I built a dedicated “LocalPositionAnim” class as follows:

public class LocalPositionAnim : MonoBehaviour
{
    private Vector3 targetPosition;
    private Vector3 startPosition;
    private bool animating = false;
    private float startTime = 0;

    private float duration = 0;

    private Transform target;

    // Update is called once per frame
    void Update()
    {
        if(!animating)
            return;

        // Compute the interpolation ratio from the elapsed time:
        float curTime = Time.time;
        float ratio = (curTime - startTime) / duration;

        ratio = Mathf.Clamp01(ratio);

        // Ratio will be between 0 and 1.
        Vector3 pos = startPosition + (targetPosition - startPosition) * ratio;
        target.localPosition = pos;

        if(ratio >= 1.0f) {
            animating = false;
        }
    }

    // Note: this method must be public, since it is called from the camera script:
    public void animate(Transform tgt, Vector3 targetPos, float dur)
    {
        target = tgt;
        targetPosition = targetPos;
        startPosition = tgt.localPosition;
        startTime = Time.time;
        duration = Mathf.Max(dur, 0.001f);

        // No need to animate if we are already at the target position:
        animating = (targetPos - startPosition).magnitude > 1e-5f;
    }
}
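
For reference, using this component from the camera script could look like the following (a sketch: the 0.3s duration and the add-component-on-the-fly approach are my own choices here):

        // Animate the payload towards its carrying position over 0.3 seconds:
        LocalPositionAnim anim = gameObject.AddComponent<LocalPositionAnim>();
        anim.animate(payload.transform, new Vector3(0.0f, 0.1f, 1.0f), 0.3f);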

  • Now I can add that script on my User game object dynamically, and use it to animate the payload position!
  • Also, I implemented a similar class to be able to control the attitude of my payload object, and the results seem just fine: see the sketch below.
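
I won't paste the full class, but a minimal sketch of that companion class could look like this (the LocalRotationAnim name is mine; it simply slerps the local rotation the same way LocalPositionAnim lerps the position):

using UnityEngine;

// Companion of LocalPositionAnim: interpolates the local rotation instead.
public class LocalRotationAnim : MonoBehaviour
{
    private Quaternion targetRotation;
    private Quaternion startRotation;
    private bool animating = false;
    private float startTime = 0;
    private float duration = 0;

    private Transform target;

    void Update()
    {
        if(!animating)
            return;

        float ratio = Mathf.Clamp01((Time.time - startTime) / duration);

        // Spherical interpolation between the start and target attitudes:
        target.localRotation = Quaternion.Slerp(startRotation, targetRotation, ratio);

        if(ratio >= 1.0f) {
            animating = false;
        }
    }

    public void animate(Transform tgt, Quaternion targetRot, float dur)
    {
        target = tgt;
        targetRotation = targetRot;
        startRotation = tgt.localRotation;
        startTime = Time.time;
        duration = Mathf.Max(dur, 0.001f);
        animating = Quaternion.Angle(startRotation, targetRot) > 0.01f;
    }
}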

So this is it for today. I'm quite happy with the results achieved so far: we can create the function objects and move them around, we have physics activated, and the gamepad is working fine: this looks like a good start.

The next thing I should focus on would be displaying some kind of virtual keyboard to be able to quickly type names, and/or adding support for input/output pins on the function objects: these could be really interesting enhancements to work on!
