How do I declare a negative float while initializing a vector?

This should be basic, baby stuff.

Here’s a little snippet of my code:

    public Vector2 SubjectScreenOffset = new Vector2(-0.125f, 0.0f); // offset from the centre of the screen. A value of ±0.5 will reach the edge of the appropriate axis

    void CalculateCameraOffset()
    {
        Vector2 screenSize = new Vector2(cam.pixelWidth, cam.pixelHeight);
        print("SubjectScreenOffset = " + SubjectScreenOffset);
        print("ScreenSize = " + screenSize);
        print("Testvalue: " + (SubjectScreenOffset.x * screenSize.x));
        Vector2 screenPixelPosition = new Vector2((SubjectScreenOffset.x * screenSize.x) + (screenSize.x * 0.5f), (SubjectScreenOffset.y * screenSize.y) + (screenSize.y * 0.5f));
        print("ScreenPixelPosition = " + screenPixelPosition);
        //Vector3 desiredPlayerPos = cam.ScreenToWorldPoint();
    }

and the first print outputs: “SubjectScreenOffset = (0.1, 0.0)”

Clearly that is not what I declared as the initial value. Two decimal places have been chopped off, and the sign is lost. What’s going on?

They haven’t been chopped off. It’s just the Vector2/Vector3 ToString method, which rounds the values to one decimal digit when building the string, since the string representation of three full-precision floats would otherwise be quite long. The underlying float still holds the exact value you assigned.
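If you want to see the full values, you can pass a standard numeric format string to ToString, or print a component directly. A minimal sketch using the field from your snippet:

    print("SubjectScreenOffset = " + SubjectScreenOffset.ToString("F4")); // e.g. prints "(-0.1250, 0.0000)"
    print("x = " + SubjectScreenOffset.x);                                // prints the raw float, e.g. -0.125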

The value also keeps its sign, and the sign prints for me as well. Either you haven’t noticed it, or you are doing something different from what you showed here. Keep in mind that SubjectScreenOffset is a public variable, so the value you set in the field initializer only matters when you first attach the script to the GameObject. From then on the value is serialized and can be changed in the inspector, which you might have done. So check the value in the inspector.
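To illustrate the serialization behaviour (a hypothetical example, not your actual script): any public or [SerializeField] field is serialized, so once the component exists, the inspector value wins over the field initializer, while a non-serialized field always starts with the value from the code:

    using UnityEngine;

    public class OffsetExample : MonoBehaviour
    {
        // Serialized: after the component is first attached, the value shown
        // (and possibly edited) in the inspector overrides this initializer.
        public Vector2 SubjectScreenOffset = new Vector2(-0.125f, 0.0f);

        // Not serialized: this always starts with the value from the code.
        [System.NonSerialized]
        public Vector2 runtimeOffset = new Vector2(-0.125f, 0.0f);
    }

Right-clicking the component header in the inspector and choosing Reset also restores the serialized fields to their script defaults.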