Click a planet on screen. Simple, right? You might reach for a ray-sphere intersection test, but that's actually the wrong tool. Small spheres, floating-point precision, and camera perspective conspire to make true 3D picking unreliable. Screen-space distance is simpler, faster, and more user-friendly.
The Raycasting Problem
Raycasting computes where a ray from the camera through the mouse position intersects scene geometry. For a sphere, you solve a quadratic equation (a reference version is sketched after the list below). The math is elegant, but it creates practical issues when objects are small:
- Sub-pixel targets: A distant sphere might render as 3 pixels. Your mouse must land exactly on those pixels.
- Floating-point drift: The ray origin, direction, and sphere position accumulate error. Small targets amplify tiny inaccuracies.
- Touch unfriendly: Mobile users have fat fingers. Requiring pixel-perfect accuracy creates frustration.
- Depth ambiguity: When spheres overlap in screen space, raycasting picks the frontmost—not necessarily the one the user intended.
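For reference, here is the quadratic test that the screen-space approach replaces. This is a minimal sketch, not any engine's API: raySphere, ro (ray origin), and rd (unit ray direction) are illustrative names. The discriminant it checks is exactly what becomes noisy when the radius is tiny relative to accumulated floating-point error.

// Minimal ray-sphere test: solves |ro + t*rd - center|^2 = radius^2 for t.
// Assumes rd is normalized, so the quadratic's leading coefficient is 1.
function raySphere(ro, rd, center, radius) {
  const ox = ro.x - center.x;
  const oy = ro.y - center.y;
  const oz = ro.z - center.z;
  const b = ox * rd.x + oy * rd.y + oz * rd.z;     // half the linear coefficient
  const c = ox * ox + oy * oy + oz * oz - radius * radius;
  const disc = b * b - c;                          // discriminant
  if (disc < 0) return null;                       // ray misses the sphere
  const t = -b - Math.sqrt(disc);                  // nearer of the two roots
  return t >= 0 ? t : null;                        // a hit behind the origin doesn't count
}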
The Screen-Space Solution
Instead of casting rays into 3D space, project your spheres to screen coordinates and compute 2D distance from the mouse. If the mouse is within a threshold of a projected center, that object is selected.
function getSelectedSphere(mouseX, mouseY, spheres, camera) {
  let closest = null;
  let closestDist = Infinity;

  for (const sphere of spheres) {
    // Project sphere center to screen
    const screen = projectToScreen(sphere.position, camera);
    if (!screen) continue; // Behind camera

    // 2D distance: mouse to projected center
    const dx = mouseX - screen.x;
    const dy = mouseY - screen.y;
    const dist = Math.sqrt(dx * dx + dy * dy);

    // Effective radius includes visual size + touch margin
    const visualRadius = sphere.radius * screen.scale;
    const hitRadius = Math.max(visualRadius + 15, 40); // Min 40px

    if (dist < hitRadius && dist < closestDist) {
      closest = sphere;
      closestDist = dist;
    }
  }

  return closest;
}
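Wiring this to a click handler is straightforward. A sketch, assuming a canvas element plus spheres and camera objects in scope; selectPlanet is a placeholder for whatever your app does with the result:

// Convert the click to canvas-local pixel coordinates, then pick
canvas.addEventListener('click', (event) => {
  const rect = canvas.getBoundingClientRect();
  const picked = getSelectedSphere(
    event.clientX - rect.left,
    event.clientY - rect.top,
    spheres,
    camera
  );
  if (picked) selectPlanet(picked); // placeholder handler
});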
The Projection Math
To project 3D to 2D, transform the world position into camera space, then apply the perspective divide. The key insight: in the convention used here, where view-space Z is measured along the camera's forward vector, points in front of the camera have positive Z.
function projectToScreen(worldPos, camera) {
  // Transform to camera space
  const dx = worldPos.x - camera.x;
  const dy = worldPos.y - camera.y;
  const dz = worldPos.z - camera.z;

  // Dot with camera basis vectors
  const viewZ = dx * camera.forward.x + dy * camera.forward.y + dz * camera.forward.z;

  // Behind camera? Skip it
  if (viewZ < 0.01) return null;

  // Perspective divide
  const scale = 1.0 / viewZ;
  const viewX = (dx * camera.right.x + dy * camera.right.y + dz * camera.right.z) * scale;
  const viewY = (dx * camera.up.x + dy * camera.up.y + dz * camera.up.z) * scale;

  // Map to screen pixels
  // (screenCenterX/Y and screenWidth/Height are assumed viewport globals)
  return {
    x: screenCenterX + viewX * screenWidth,
    y: screenCenterY - viewY * screenHeight,
    scale: scale
  };
}
Touch-Friendly Hit Zones
The real win: you can decouple visual size from interaction size. A 5-pixel sphere can have a 40-pixel hit zone. Users don't need to be surgeons to click your UI.
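A small sketch of that decoupling, sizing the minimum hit zone by input type. The pixel values are assumptions to tune, and pointerType is the string a PointerEvent reports:

// Touch needs more slack than a mouse cursor; both ignore how small the sphere renders
function minHitRadius(pointerType) {
  return pointerType === 'touch' ? 44 : 24; // assumed defaults, in CSS pixels
}

function hitRadiusFor(sphere, screen, pointerType) {
  const visualRadius = sphere.radius * screen.scale; // on-screen size
  return Math.max(visualRadius + 15, minHitRadius(pointerType));
}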
Optimizations
1. Early Rejection
Skip spheres behind the camera before doing any math. A single dot product tells you if an object is in front or behind.
// Dot product of (object - camera) with camera forward
const toObject = subtract(sphere.position, camera.position);
const dot = dot3(toObject, camera.forward);
if (dot < 0) continue; // Behind camera, skip entirely
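subtract and dot3 aren't defined above; they're the usual three-component vector helpers, for example:

// Minimal vector helpers assumed by the snippet above
function subtract(a, b) {
  return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z };
}

function dot3(a, b) {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}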
2. Squared Distance
sqrt() is expensive. Compare squared distances against squared thresholds to skip the square root entirely.
// Track the best squared distance instead (initialize closestDistSq to Infinity)
const distSq = dx * dx + dy * dy;
const hitRadiusSq = hitRadius * hitRadius;

if (distSq < hitRadiusSq && distSq < closestDistSq) {
  closest = sphere;
  closestDistSq = distSq;
}
3. Spatial Partitioning
For thousands of objects, divide screen space into a grid. Only test objects in the cell containing the mouse (plus neighbors). O(n) becomes O(1) average case.
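A sketch of the idea, assuming the spheres have already been projected for the current frame. The 64-pixel cell size and the shape of the projected array are arbitrary choices to tune:

// Screen-space grid over projected centers: [{ sphere, x, y }]
const CELL = 64; // cell size in pixels

function buildScreenGrid(projected) {
  const grid = new Map();
  for (const p of projected) {
    const key = `${Math.floor(p.x / CELL)},${Math.floor(p.y / CELL)}`;
    if (!grid.has(key)) grid.set(key, []);
    grid.get(key).push(p);
  }
  return grid;
}

function candidatesNear(grid, mouseX, mouseY) {
  const cx = Math.floor(mouseX / CELL);
  const cy = Math.floor(mouseY / CELL);
  const out = [];
  // Check the mouse's cell plus its eight neighbors so hit zones
  // straddling a cell boundary are not missed
  for (let dx = -1; dx <= 1; dx++) {
    for (let dy = -1; dy <= 1; dy++) {
      const bucket = grid.get(`${cx + dx},${cy + dy}`);
      if (bucket) out.push(...bucket);
    }
  }
  return out;
}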
4. Unity-Specific: BurstCompiler + Jobs
In Unity, NativeArray buffers fed to Burst-compiled jobs can process thousands of sphere projections in parallel, and Burst vectorizes the per-sphere loop into SIMD operations.
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

[BurstCompile]
public struct SelectionJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<float3> positions;
    [ReadOnly] public NativeArray<float> radii; // not used in Execute; available for per-object hit radii
    [ReadOnly] public float4x4 viewProj;
    [ReadOnly] public float2 mousePos;
    [ReadOnly] public float2 screenSize;

    public NativeArray<float> distancesSq; // Output

    public void Execute(int i)
    {
        float4 clip = math.mul(viewProj, new float4(positions[i], 1f));
        if (clip.w < 0.01f)
        {
            distancesSq[i] = float.MaxValue; // Behind camera
            return;
        }

        float2 ndc = clip.xy / clip.w;
        float2 screen = (ndc * 0.5f + 0.5f) * screenSize;
        float2 delta = screen - mousePos;
        distancesSq[i] = math.dot(delta, delta);
    }
}
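A typical pattern, assuming standard Unity Jobs usage: fill the arrays, Schedule the job with a batch size, Complete the returned handle, then scan distancesSq on the main thread for the smallest value that falls inside the squared hit radius. The radii array above is the natural place to feed a per-object hit radius into that final comparison.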
Key Takeaways
- Raycasting small spheres is unreliable—floating-point errors and sub-pixel targets hurt UX
- Project to screen space, measure 2D distance. Simpler math, better results
- Expand hit zones beyond visual radius for touch-friendly interaction
- Skip sqrt() by comparing squared distances
- For large object counts, use spatial partitioning or Burst-compiled parallel jobs