Eraser effect on a plane using a shader

I am trying to produce an eraser for a material on a plane. The way I am thinking of doing this is by passing an array to the shader telling it where the material should be transparent; if the value from the array is 0, I return no color for the material (i.e. transparent). I have 2 problems:

  1. How do I declare and pass an array in CG?

  2. Does this way work, and if so, is it the best way to do this? I am thinking it might be very demanding…
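
For reference on question 1: CG/HLSL shaders can declare fixed-size uniform arrays, but Unity only gained a clean way to upload them (Material.SetFloatArray) in version 5.4 – which is why a texture is usually used instead, as the answer below suggests. A minimal sketch, assuming Unity 5.4+ and a made-up _Mask property:

    // CG side – declare a fixed-size array uniform in the shader:
    //     float _Mask[64];
    //     ...
    //     clip(_Mask[idx] - 0.5); // discard when the mask entry is 0

    // Script side (UnityScript, Unity 5.4+; the old `renderer` shortcut
    // is gone by then, so use GetComponent):
    var mask : float[] = new float[64];
    for (var i : int = 0; i < mask.Length; ++i)
        mask[i] = 1.0; // 1 = opaque, 0 = erased
    GetComponent.<Renderer>().material.SetFloatArray("_Mask", mask);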

UPDATE

OK, so I have this shader:

Shader "mShaders/Holes1" {
    Properties {
      _MainTex ("Texture (RGB)", 2D) = "white" {}
      _SliceGuide ("Slice Guide (RGB)", 2D) = "white" {}
      _SliceAmount ("Slice Amount", Range(0.0, 1.0)) = 0.9
    }
    SubShader {
      Tags { "RenderType" = "Opaque" }
      Cull Off
      CGPROGRAM
      //if you're not planning on using shadows, remove "addshadow" for better performance
      #pragma surface surf Lambert 
      //addshadow
      struct Input {
          float2 uv_MainTex;
          float2 uv_SliceGuide;
          float _SliceAmount;
      };
      sampler2D _MainTex;
      sampler2D _SliceGuide;
      float _SliceAmount;
      void surf (Input IN, inout SurfaceOutput o) {
          clip(tex2D (_SliceGuide, IN.uv_SliceGuide).rgb - _SliceAmount);
          o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
      }
      ENDCG
    }
    Fallback "Diffuse"
  }
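
The _SliceAmount threshold can also be driven from script at runtime with the standard Material.SetFloat call, the same way the guide texture is assigned below:

    // raise or lower the clip threshold from script (UnityScript)
    renderer.material.SetFloat("_SliceAmount", 0.9);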

In my touch script, the Start function contains:

    // create a small guide texture and assign it to the material's _SliceGuide slot
    texture = new Texture2D(32, 16);
    renderer.material.SetTexture("_SliceGuide", texture);

    // Fill the texture with white (you could also paint it black, then draw with white)
    for (var y : int = 0; y < texture.height; ++y)
    {
        for (var x : int = 0; x < texture.width; ++x)
        {
            texture.SetPixel (x, y, Color.white);
        }
    }
    // Apply() uploads all the SetPixel calls to the graphics card
    texture.Apply();
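
One variant worth trying (an assumption on my part, not a confirmed fix for the iPhone problem below): create the texture with an explicit format and no mipmaps, so the copy on the GPU matches exactly what SetPixel writes:

    // explicit format, no mipmaps – a sketch, not a confirmed fix
    texture = new Texture2D(32, 16, TextureFormat.ARGB32, false);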

and in Update I have:

    if (Input.touchCount == 0 && !Input.anyKey) return;

    // Only continue if the ray hits something
    var hit : RaycastHit;
    if (!Physics.Raycast (Camera.main.ScreenPointToRay(Input.mousePosition), hit)) return;

    // Just in case, make sure the collider also has a renderer, material and
    // texture. Also ignore primitive colliders: hit.textureCoord is only
    // valid for a MeshCollider.
    var renderer : Renderer = hit.collider.renderer;
    var meshCollider = hit.collider as MeshCollider;
    if (renderer == null || renderer.sharedMaterial == null || texture == null || meshCollider == null) return;

    // Convert the hit's UV coordinate to pixel coordinates
    var tex : Texture2D = texture;
    var pixelUV = hit.textureCoord;
    pixelUV.x *= tex.width;
    pixelUV.y *= tex.height;

    // add a black spot, which the shader then clips to transparent
    // (SetPixel takes ints, so round the UV-derived coordinates down)
    tex.SetPixel(Mathf.FloorToInt(pixelUV.x), Mathf.FloorToInt(pixelUV.y), Color.black);
    tex.Apply();
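
If a single pixel turns out too blocky on a 32×16 guide, the same SetPixel idea extends to a simple round brush – a sketch, with brushRadius as a made-up parameter:

    // erase a small disc around the hit point instead of one pixel
    var brushRadius : int = 2; // in guide-texture pixels
    var cx : int = Mathf.FloorToInt(pixelUV.x);
    var cy : int = Mathf.FloorToInt(pixelUV.y);
    for (var dy : int = -brushRadius; dy <= brushRadius; ++dy)
    {
        for (var dx : int = -brushRadius; dx <= brushRadius; ++dx)
        {
            if (dx * dx + dy * dy <= brushRadius * brushRadius)
                tex.SetPixel(cx + dx, cy + dy, Color.black);
        }
    }
    tex.Apply();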

Everything seems to work on the computer, but on my iPhone the object (here a mapped sphere) is rendered white with black spots after the first touch (before the first touch it is rendered fine with the texture on) – the black spots are where I touch the object. What am I doing wrong?

Not difficult at all. Use your own Texture2D as the array – shaders are good at reading from textures. In script it works like a 2D array, except you use SetPixel(x, y) to change a value. There are also some rules about writing the texture back to the graphics card after changes (Apply), and about using SetPixels when you have lots of changes.
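
For example, filling the whole guide in one call instead of the SetPixel loop above – SetPixels takes a full Color array and is much cheaper than per-pixel calls:

    // bulk-write the whole guide texture in one call (UnityScript)
    var fill : Color[] = new Color[texture.width * texture.height];
    for (var i : int = 0; i < fill.Length; ++i)
        fill[i] = Color.white;
    texture.SetPixels(fill);
    texture.Apply(); // still required to upload the change to the GPU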

Values are from 0–1 (the normal color range). Pass it to the shader just like a “real” texture. The shader can then use whichever channel (R, G, B or A) you like as the alpha – like o.Alpha = guide.r;
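
Applied to the shader above, that looks roughly like this – a sketch of the alpha-blended alternative to clip(), using the standard surface-shader alpha directive and a transparent queue:

    Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
    CGPROGRAM
    #pragma surface surf Lambert alpha
    sampler2D _MainTex;
    sampler2D _SliceGuide;
    struct Input {
        float2 uv_MainTex;
        float2 uv_SliceGuide;
    };
    void surf (Input IN, inout SurfaceOutput o) {
        o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
        // the guide's red channel becomes the alpha:
        // white = opaque, black = fully erased
        o.Alpha = tex2D (_SliceGuide, IN.uv_SliceGuide).r;
    }
    ENDCG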

Normal texture rules apply, so the texture size won't matter – it stretches to cover the plane; larger just means less pixelated. The math to figure out which pixel to change is standard “percentage across” math – pixel = percentOver * textureWidth. For example, a hit at uv.x = 0.5 on a 32-pixel-wide texture maps to pixel 16.