True-To-Life 3D Object Rotations Using A Mouse

A new perspective on interpreting the mouse's movement.

By Raffi J. Kasparian, Annandale, Virginia USA

Raffi Kasparian <> has been programming on the Macintosh as a hobby since 1990 when he bought his first Mac (an SE).  Languages include Pascal, C, C++ and Java.  He won the MacTech Programmers' Challenge in July 1993 and again in May 1995.  He holds his degrees in music (piano performance) and earns his living as a classical pianist. 


A simple world globe is a sphere rotating around a linear axis.  Slightly fancier world globes can rotate any direction around a central point.  When I rotate a globe of this type with my hand, it follows my intentions so precisely that the exact spot on the globe I touch remains in contact with my hand until I let go.  I can rotate the globe easily and precisely to any position I desire.  This is an example of real life object rotation and what 3D rotation programs should strive to emulate.  When a user clicks a mouse somewhere on a 3D computer image and drags the cursor to a new location, the virtual object should respond just as intuitively as the globe.  Unfortunately, most computer programs don't work this way.  Why not?  Simply stated, most programs are nonchalant about interpreting the user's input and consequently rotate the object incorrectly.  Many times the error is subtle.  Other times it is gross.  Some would say that the very title of this article is an oxymoron.  On the contrary, this article will show how a 2D pointing device such as a mouse can be used to accurately convey 3D rotation instructions to a computer, thereby giving precision and an intuitive feel to the rotation process.

Mapping 3D Coordinates to Screen Coordinates

Sight Lines

A fundamental operation in accurately presenting 3D images on a 2D surface is converting from one dimension to the other.  Imagine that you are sitting in a room looking out through a glass window at your neighbor's house (Figure 1).  Keeping your head as still as possible, you use a grease pencil to trace an image of the house onto the glass.  If your hand-eye coordination is good, you will have an exact outline of the house when you are finished.  Whereas the house may be 20 feet tall, its window image may be only a few inches tall or less depending on your distance from the house.  For every point p on the house, you will draw its image at the point p' where a line from point p to your eye intersects the window.  These lines are called sight lines and they necessarily meet in one point in the eye of the viewer.

Figure 1.  Side View of Window and 3D Object

This situation is analogous to a computer drawing 3D images onto a 2D video screen.  Before we proceed, please take a moment to familiarize yourself with the orientation of the 3D axes with respect to a computer screen (Figure 2).  When looking straight at the screen, the x-axis is horizontal and increases to the right, the y-axis is the depth axis and increases as it moves away from the viewer, and the z-axis is vertical and increases as it moves up.  Be sure to contrast this with the standard computer screen 2D axes (Figure 3) where the x-axis is horizontal and increases to the right, the y-axis is vertical and increases downwards and the origin (0, 0) is the top left corner of the screen.

Figure 2.  3D Axes When Looking Straight at the Screen

Figure 3.  Local 2D Axes When Looking Straight at the Screen

Figure 4 below is a slight modification of Figure 1.  In it, the window has been replaced by a video screen and the house is now a virtual 3D object.  We are looking at the screen from its side which puts the real world to its left and the virtual world to its right.  The horizontal line is our depth or y-axis and the vertical line is our height or z-axis.  We ignore the width axis (x) for the moment which would be the distance away from this very page you are reading.  For convenience, we place our screen on the y = 0 plane and our viewer on the y-axis.  Point v represents the viewer's eye, p is a point on the 3D image and p' is the point on the screen where the computer will draw p.  Points behind the screen have positive y-coordinates while points in front of the screen have negative y-coordinates.  So, for example, vy is negative, py is positive and p'y = 0.  Figure 5 is a view of the same situation from above allowing us to see the x-axis.

Figure 4.  Side View of Video Screen and Virtual 3D Object

Figure 5.  Aerial View of Video Screen and Virtual 3D Object

To map a 3D point p on the house to a 2D point p' on the screen, we imagine a straight line from point p to the user's eye.  Point p' is located on the screen at the intersection of this virtual sight line and the screen.  As a matter of fact, all 3D points located on this sight line are mapped to the one 2D point where this sight line intersects the screen.  Some computer programs incorrectly assume that sight lines are parallel to the depth axis resulting in an incorrect mapping of 3D points to screen points.

Determining the Parameters of the 3D World

Before we can tell our computer how to perform the operation of mapping 3D coordinates to screen coordinates, two decisions have to be made.  First, we need to decide how far the viewer is from his screen and what his screen resolution is set to.  Obviously, we have no control over these factors but we can make intelligent assumptions.  If we assume, for example, that the typical user keeps a distance of 2 feet between himself and the screen and that his screen displays 50 pixels per inch then we can estimate his distance from the screen to be 24 X 50 = 1200 pixels.  This gives the viewer coordinates of (0, -1200, 0).  Second, by placing the screen on the y = 0 plane we have already decided that the 3D origin will lie somewhere on the plane of the screen, but we still need to decide where on the screen it is.  We'll assume that the average user looks straight into the screen center.  Since we have already decided that he is located on the y-axis the most sensible choice for placement of the origin may be the center of the screen.  Wherever we decide this point to be, it will have local screen coordinates (originx, originy).
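The arithmetic above is easy to sketch directly.  The following standalone snippet (the class and method names are mine, not part of the article's listings, and the distance and resolution figures are the assumptions just discussed) turns a physical viewing distance and screen resolution into the viewer's 3D position:

```java
// Estimate the viewer's 3D position from assumed physical parameters.
// The viewer sits on the negative y-axis, in front of the y = 0 screen plane.
public class ViewerSetup {
    public static double viewerY(double distanceInches, double pixelsPerInch) {
        return -distanceInches * pixelsPerInch;
    }

    public static void main(String[] args) {
        double vy = viewerY(24, 50); // 2 feet at 50 pixels per inch
        System.out.println("viewer at (0, " + vy + ", 0)"); // (0, -1200.0, 0)
    }
}
```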

Deriving The Conversion Function

Now that the parameters of our 3D world have been decided, we set about to find a function relating the 3D coordinates of any given point p, to its local screen coordinates p'.  Keeping in mind that the distance from any point p to the plane x = 0 is its x-coordinate px, its distance to the plane y = 0 is its y-coordinate py and its distance to the plane z = 0 is its z-coordinate pz , we reason as follows:
p'x/px = vy/(vy - py)  From the law of similar triangles.  (Refer to Figure 5.)
p'x = px·vy/(vy - py)  Solving for p'x.
p'y = 0  By definition.
p'z/pz = vy/(vy - py)  From the law of similar triangles.  (Refer to Figure 4.)
p'z = pz·vy/(vy - py)  Solving for p'z.
We have now computed the 3D coordinates of p' (p'x, p'y, p'z), but further transformation is required to put our point p' into local screen coordinates.  Since the computer's 2D origin is the top left corner of the screen and since its vertical axis increases downwards,
p'screenx = p'x + originx and p'screeny = -p'z + originy.
The conversion from 3D coordinates to 2D screen coordinates is implemented by SpacePoint.toScreenCoord() which is called for each point of the object whenever the object has been rotated and needs to be redrawn.

class OrderedTriple{//a class to represent a 3D coordinate
 public double x, y, z;

 static OrderedTriple origin = new OrderedTriple(0, 0, 0);
 static OrderedTriple xAxis = new OrderedTriple(1, 0, 0);
 static OrderedTriple yAxis = new OrderedTriple(0, 1, 0);
 static OrderedTriple zAxis = new OrderedTriple(0, 0, 1);

 public OrderedTriple(double x, double y, double z){
  this.x = x;
  this.y = y;
  this.z = z;
 }
}

//3D coordinate with the ability to map to the 2D screen
class SpacePoint extends OrderedTriple{
 public int screenx, screeny;//its 2D screen coordinates

 //the 2D screen coordinates of the 3D origin which happens to be located on the screen
 public static int originx, originy;

 //the 3D coordinates of the person viewing the screen
 public static OrderedTriple viewer;

 //Construct the 3D coordinates of a point on the screen plane (y = 0) from its 2D
 //screen coordinates
 public SpacePoint(int X, int Y){
  super(X - originx, 0, -(Y - originy));
 }

 //Convert a 3D coordinate to its 2D screen coordinates
 public void toScreenCoord(){
  double M = (double)viewer.y/(viewer.y - y);
  screenx = (int)(Math.round(M * x)) + originx;
  screeny = -(int)(Math.round(M * z)) + originy;
 }
}
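As a quick sanity check of the conversion, here is a standalone sketch that mirrors the arithmetic of toScreenCoord() (the class name and the assumed 320 × 240 screen center are mine): a point on the screen plane (y = 0) maps one-to-one, while the same x at depth 1200, as far behind the screen as the viewer is in front of it, is foreshortened by half.

```java
// Standalone check of the perspective conversion p' = p·vy/(vy - py),
// shifted into local screen coordinates.  Mirrors toScreenCoord().
public class PerspectiveCheck {
    static final double VIEWER_Y = -1200;            // viewer at (0, -1200, 0)
    static final int ORIGIN_X = 320, ORIGIN_Y = 240; // assumed screen center

    public static int[] toScreen(double x, double y, double z) {
        double M = VIEWER_Y / (VIEWER_Y - y);
        int sx = (int) Math.round(M * x) + ORIGIN_X;
        int sy = -(int) Math.round(M * z) + ORIGIN_Y;
        return new int[]{ sx, sy };
    }

    public static void main(String[] args) {
        // On the screen plane there is no foreshortening...
        int[] a = toScreen(100, 0, 0);    // (420, 240)
        // ...but at depth 1200 the point is foreshortened by half.
        int[] b = toScreen(100, 1200, 0); // (370, 240)
        System.out.println(a[0] + "," + a[1] + "  " + b[0] + "," + b[1]);
    }
}
```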

Mapping Screen Coordinates to 3D Coordinates

The Cursor's Path

When the user drags his mouse on a 3D image to rotate it, should the object's position after rotation depend only on the final position of the cursor or should it also depend on the cursor's actual path?

Figure 6.  A 3D Virtual Clock Before and After Rotation

To answer this question let's consider the 3D image of a clock (Figure 6).  In Frame 1, the clock's back is facing the user and the user's goal is to rotate the clock to its normal position.  First, he drags in a straight line from a to b as depicted in Frame 1.  As expected, the clock turns over to face the user (Frame 2).  He then notices that the clock is upside down.  If he drags in a straight line from b back to a, exactly reversing his path, we would expect the clock to return to its original position.  If, however, he drags in a semi-circle from b to a as depicted in Frame 2, we would expect the clock to rotate counter-clockwise to the position in Frame 3.  Some computer programs would bring the clock back to its original orientation in both cases, reasoning that the cursor returned to its starting position and it doesn't matter where it went in between.  In a 2D world this logic would be correct but it is obviously undesirable in the 3D world.  We therefore conclude that an object's position after rotation depends on the final position of the cursor as well as the path it took to get there.

Calculating the Path's Beginning and End

Having established the importance of the cursor's path, how can we interpret it correctly to the computer?  We have seen how any 3D point can be mapped to one 2D screen point.  A 2D screen point on the other hand, can be mapped to an infinity of 3D points all located on the sight line passing through it.  Yet, when the user clicks the mouse somewhere on a 3D image, he has in mind not an infinity of points but one particular point.  This point to which we will refer as p1 is the point on the object's surface that would be visible if the cursor were not blocking its view.  When the user drags the cursor, he expects to be manipulating p1 just as if it were a spot on a world globe that he were moving by hand.  This means that we must interpret the cursor's movement as though it were a point on the object's surface moving through 3D space.  Every time the operating system sends our program a mousedrag event, we must map its starting (pre-rotation) and ending (post-rotation) screen coordinates to 3D coordinates.
To be completely candid, unless the user's computer is infinitely fast (mine isn't, is yours?), our program will never know the cursor's exact path.  Relying on a succession of mousedrag events to keep it informed as to the cursor's whereabouts, the program must approximate the path with a series of straight line path segments (Figure 7).  The quicker the computer, the more segments a given path will be divided into and the better the approximation will be.  In any case, the object will be rotated path segment by path segment in an attempt to approximate the cursor's exact path.

Figure 7.  How the Mouse's Path is Approximated

Calculating p1's Pre-Rotation Coordinates

The following method of calculating p1's exact pre-rotation coordinates requires that the program's internal representation of an object rely on a list of points and connecting edges.  Such a representation is in effect a network of polygonal faces whose 2D screen representations are polygons whose shapes depend on the current orientation of the object.  To determine p1's pre-rotation coordinates, we loop through the object's faces looking for the one whose 2D screen representation contains the cursor.  Although Java's Polygon.inside() might seem ideal for this purpose, it isn't perfect and I found it necessary to write the function SmartPolygon.inside() which is mathematically precise.  Once we have found the face that contains the cursor, we draw an imaginary sight line from our viewer through the cursor and into the screen until it intersects this face.  The equation of the sight line derives from its direction vector and the 3D coordinates of the cursor.  The equation for the plane of the intersected face derives from any three non-collinear points on its surface (corners of the 3D face which are also vertices of the object are convenient choices).  Simultaneous solving of these equations yields the exact pre-rotation coordinates of p1.
The calculation of p1's pre-rotation coordinates is implemented by DrawingCanvas.mouseDown() whenever the operating system delivers a mousedown message to the program.

//A drawing surface that knows how to draw 3D objects
class DrawingCanvas extends Canvas{

 //A DrawingCanvas class member that remembers the last 3D coordinates of the
 //selected point.
 OrderedTriple lastCoord = null;

 //When a mousedown occurs, find out and remember what point on the object's
 //surface has been clicked on.
 void mouseDown(
  Event event){
  lastCoord = toSurfaceCoord(event.x, event.y);
  lastRadius = lastCoord.length();//its distance from the origin, the center of rotation
 }

 //A DrawingCanvas class member whose implementation is left up to the individual
 //programmer.  The 3D virtual object.
 Solid theSolid;

 //Calculate what 3D point on the object's surface corresponds to the screen coordinates 
 //(X, Y).  WARNING: Italicized code is pseudo-code.  Will not compile unless
 //pseudo-code is replaced by working code.
 OrderedTriple toSurfaceCoord(
  int X,
  int Y){//2D screen coordinates of the cursor

  //We will return surfacePoint
  OrderedTriple surfacePoint = null;

  //Set cursorPoint to the 3D coordinates of the cursor
  SpacePoint cursorPoint = new SpacePoint(X, Y);

  for every face f of theSolid{
   Polygon p = face f's 2D polygonal screen image;
   if(SmartPolygon.inside(p, X, Y)){
    OrderedTriple p1, p2, p3;
    Set p1, p2 and p3 to
      three distinct, non-collinear points on face f;
    surfacePoint =
      OrderedTriple.sectPlaneLine(p1, p2, p3,
       cursorPoint, SpacePoint.viewer);
   }
  }
  if(surfacePoint == null){//The user clicked outside the 3D image

   //We'll talk about this later
   surfacePoint = outOfBoundsMouseDown(cursorPoint);
  }
  return surfacePoint;
 }


 //Find the 3D coordinates of the intersection of the plane containing points P1, P2, P3
 //and the line containing L1, L2
 static OrderedTriple sectPlaneLine(
  //Three distinct non-collinear points on the plane
  OrderedTriple P1,
  OrderedTriple P2,
  OrderedTriple P3,

  //Two distinct points on the line
  OrderedTriple L1,
  OrderedTriple L2){
  OrderedTriple n = P2.minus(P1).cross(P3.minus(P2));//normal to the plane
  OrderedTriple l = L2.minus(L1);//direction of the line
  double A =;
  if(A == 0) return null;//the line is parallel to the plane
  double B =;
  double C =;
  double K = (B - C)/A;//solve n·(L1 + K·l) = n·P1 for K
  return l.times(K).plus(L1);
 }
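Here is a standalone check of the same algebra using plain double[] triples (the class and helper names are mine, standing in for OrderedTriple's methods): the plane through (0,0,0), (1,0,0) and (0,1,0) is z = 0, and a vertical line through (2, 2, 1) and (2, 2, -1) should meet it at (2, 2, 0).

```java
// Standalone check of the plane/line intersection used by sectPlaneLine().
public class PlaneLineCheck {
    static double[] minus(double[] a, double[] b){
        return new double[]{ a[0]-b[0], a[1]-b[1], a[2]-b[2] };
    }
    static double[] cross(double[] a, double[] b){
        return new double[]{ a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0] };
    }
    static double dot(double[] a, double[] b){
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // Same algebra as sectPlaneLine(): solve n·(L1 + K·l) = n·P1 for K.
    public static double[] sectPlaneLine(double[] P1, double[] P2, double[] P3,
                                         double[] L1, double[] L2){
        double[] n = cross(minus(P2, P1), minus(P3, P2)); // plane normal
        double[] l = minus(L2, L1);                       // line direction
        double A = dot(n, l);
        if(A == 0) return null;                           // line parallel to plane
        double K = (dot(n, P1) - dot(n, L1))/A;
        return new double[]{ L1[0] + K*l[0], L1[1] + K*l[1], L1[2] + K*l[2] };
    }

    public static void main(String[] args){
        double[] s = sectPlaneLine(new double[]{0,0,0}, new double[]{1,0,0},
                                   new double[]{0,1,0},
                                   new double[]{2,2,1}, new double[]{2,2,-1});
        System.out.println(s[0] + ", " + s[1] + ", " + s[2]); // 2.0, 2.0, 0.0
    }
}
```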

class SmartPolygon{

 //Determine whether or not a point is inside the polygon based on the reasoning that
 //an inside point will cross the polygon's edge an odd number of times as it is pulled
 //out of the polygon while an outside point will cross the edge an even number of
 //times.
 static public boolean inside(
  Polygon p,
  int x,
  int y){
  //guarantee that the first edge we test doesn't start at y or that all edges start and end
  //on y
  int start = 0;
  for(int i = 0; i < p.npoints; ++i){
   if(p.ypoints[i] != y){
    start = i;
    break;
   }
  }

  int prevdir = 0, postdir = 0, crossings = 0;
  int p1x, p1y, p2x, p2y;//edge endpoints
  p2x = p.xpoints[start];
  p2y = p.ypoints[start];

  //for every edge (p1x, p1y) -> (p2x, p2y)
  for(int i = start + 1; i <= start + p.npoints; ++i){
   p1x = p2x;
   p1y = p2y;
   p2x = p.xpoints[i % p.npoints];
   p2y = p.ypoints[i % p.npoints];

   if(p1x == p2x && p1y == p2y) continue;//skip degenerate (repeated) points
   else if(p1y == y && p2y == y){//if horizontal at y

    //and x is between the endpoints
    if((p1x <= x && p2x >= x) ||
      (p1x >= x && p2x <= x)){
     return true;//on edge
    }
   }
   else if(p1y == y){//if it starts at y
    postdir = p2y < y ? '+' : '-';

    //and continues in the same direction as the edge ending on y and crosses y to
    //the left of x
    if(prevdir == postdir && p1x < x){
     crossings++;
    }
   }
   else if(p2y == y){//if it ends on y
    if(p2x == x){//and x
     return true;//on point
    }
    //remember what direction the line is going
    prevdir = p1y < y ? '-' : '+';
   }
   //if it crosses from above to below or below to above
   else if((p1y < y && p2y > y) || (p1y > y && p2y < y)){
    //find the intersection
    double dx = p2x - p1x;
    double dy = p2y - p1y;
    double intersect = p1x + (y - p1y)*dx/dy;
    if(Math.round(intersect) == x) return true;//on edge
    if(intersect < x) crossings++;
   }
  }
  //the point is inside if it crossed an odd number of edges on its way out
  return crossings % 2 == 1;
 }
}

Calculating p1's Post-Rotation Coordinates

To determine p1's post-rotation coordinates, we consider that no matter how the object rotates, p1 will always be the same distance from the center of rotation.  If we choose the origin as the center of rotation (which we do for simplicity), this means that p1's post-rotation coordinates will lie on a sphere with center at the origin and with radius of ||p1||.  For lack of anything better to call it, we will refer to this sphere as the "sphere of influence".  We now draw an imaginary sight line from our viewer through the cursor's new location and into the screen until it intersects this sphere.  (We assume for now that the user has not dragged the cursor outside the bounds of this sphere.)  Simultaneously solving the equations of this line and sphere gives the post-rotation coordinates of p1.  Since the line will intersect the sphere in two points unless it is tangent, we must pick the point whose y-coordinate places it on the same side of the sphere as p1.  We now have the cursor's 3D path from p1's pre-rotation coordinates to p1's post-rotation coordinates.
The calculation of p1's post-rotation coordinates is implemented by DrawingCanvas.mouseDrag() every time the operating system delivers a mousedrag message to the program.

 //A DrawingCanvas class member that remembers the distance of lastCoord from the
 //origin, i.e. the radius of the "sphere of influence"
 double lastRadius;

 //When the cursor has been dragged, calculate where the selected point on the object's
 //surface has been dragged to and cause the rotation to happen.
 void mouseDrag(
  Event event){

  //map the new cursor location to a 3D point on the "sphere of influence"
  OrderedTriple newCoord =
    toSphericalCoord(event.x, event.y, lastRadius);

  rotateTheSolid(lastCoord, newCoord);//We'll talk about this later
  lastCoord = newCoord;//remember the new cursor location
  repaint();//No sense rotating the solid if you don't draw it again
 }


 //A DrawingCanvas class member.  True if the cursor is mapped to a point on the back
 //side of the "sphere of influence"
 boolean backSide;

 //Calculate the 3D point on the surface of a sphere with radius R and center at the
 //origin that corresponds to the screen coordinates (X, Y)
 OrderedTriple toSphericalCoord(
  int X,
  int Y,
  double R){

  //3D coordinates of the cursor
  SpacePoint screenPoint = new SpacePoint(X, Y);

  //intersect the line from the viewer to the cursor with the sphere
  OrderedTriple[] sects = OrderedTriple.sectSphereLine(
    OrderedTriple.origin, R, screenPoint,
    SpacePoint.viewer);

  //If the line doesn't intersect the sphere then return the point on the largest visible
  //cross section to which the cursor should refer.  We'll talk about this later.
  if(sects == null){
   inBounds = false;
   return outOfBoundsMouseDrag(screenPoint, R);
  }
  else{
   inBounds = true;

   //If the line intersects or is tangent to the sphere, figure out which intersection
   //point is in front of the other so that we can return the correct one.
   OrderedTriple back, front;
   if(sects[0].y > sects[1].y){
    back = sects[0];
    front = sects[1];
   }
   else{
    front = sects[0];
    back = sects[1];
   }
   return backSide ? back : front;
  }
 }

 //Find the intersection(s) of the sphere with radius length r and center at point c and
 //the line containing points a and b.
 static OrderedTriple[] sectSphereLine(
  OrderedTriple c,
  double r,
  OrderedTriple a,
  OrderedTriple b){

  OrderedTriple v = b.minus(a);
  OrderedTriple d = a.minus(c);
  try{
   //substitute the line a + k·v into the sphere equation and solve for k
   double[] k = solveQuadratic(v.lengthSquared(),
     2*, d.lengthSquared() - r*r);
   OrderedTriple[] answer = { v.times(k[0]).plus(a),
     v.times(k[1]).plus(a) };
   return answer;
  }
  catch(Exception e){//the line doesn't intersect the sphere
   return null;
  }
 }


 //For the quadratic equation ax² + bx + c = 0 return (-b ± √(b² - 4ac)) / 2a
 public static double[] solveQuadratic(
  double a,
  double b,
  double c)
  throws Exception{

  if(a == 0){//the equation is really linear
   double[] roots = { -c/b };
   return roots;
  }

  double epsilon = -0.1;//tolerate a slightly negative discriminant due to rounding
  double d2 = b*b - 4*a*c;
  if(d2 < epsilon){
   throw new Exception("solveQuadratic error: d2 < 0");
  }
  else if(d2 < 0) d2 = 0;

  double d = Math.sqrt(d2);

  double[] roots = new double[2];
  roots[0] = (-b + d)/(2*a);
  roots[1] = (-b - d)/(2*a);
  return roots;
 }
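A quick check of the solver's algebra (a trimmed standalone sketch with my own class name, omitting the exception plumbing): x² - 3x + 2 = 0 has roots 2 and 1, returned in the (-b + d), (-b - d) order.

```java
// Minimal standalone version of the quadratic formula used by solveQuadratic().
public class QuadraticCheck {
    public static double[] solve(double a, double b, double c) {
        double d2 = b*b - 4*a*c;
        if (d2 < 0) return null;     // no real roots
        double d = Math.sqrt(d2);
        return new double[]{ (-b + d)/(2*a), (-b - d)/(2*a) };
    }

    public static void main(String[] args) {
        double[] r = solve(1, -3, 2);
        System.out.println(r[0] + " " + r[1]); // 2.0 1.0
    }
}
```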

When p1 is Too Difficult to Compute

From the time the user presses the mouse button on a 3D image and starts dragging to the time he releases it, there is a one-time computing cost of finding the exact 3D surface coordinates p1 on which the mousedown occurred.  Thereafter, while the user is dragging, keeping up with p1's movement is simple.  If, however, the object has an enormous number of faces or its shape is complicated by holes or concavities, the time required for the computer to calculate p1 by this method may be excessive.  If it is possible to define the object's surface mathematically, the most efficient method of calculating p1 will probably be to mathematically intersect the appropriate sight line with the equation for the solid just as OrderedTriple.sectSphereLine() does to find p1's post-rotation coordinates.  Spheres, cylinders and cones, for example, technically have an infinite number of faces but can be represented with simple mathematical equations.  Obviously, the decision to represent an object's surface with mathematical equations must be made on a case by case basis.  When mathematics is not a practical option either, one must concede some accuracy in the interest of speed by finding a simpler surface that approximates the object and computing p1 as though it were on this simpler surface.  So, a teapot might be approximated by a sphere, a spaceship by a rectangular solid or a mushroom by an inverted cone.  Just remember that whatever translations or rotations are performed on the main solid must also be performed on the substitution solid and vice versa.

The Rotation

Imagine an object floating in space. It can move (translate) in any direction and spin (rotate) in any direction.  It is a free body.  If we fix in space just one point on its surface, it will no longer be able to translate but it will still be able to rotate around that point in any direction.  If we now fix in space another point on its surface, its range of motion will be further restricted to rotation about a linear axis that contains those two fixed points.  If we fix in space just one more point on its surface, making sure that the point is not located on the axis of rotation, we will have fixed the entire object in space. It will no longer be able to translate or rotate in any direction.  From this we conclude:
Premise 1. The location and orientation of an object are completely determined from the location of any three distinct non-collinear points on it or in its space.

Premise 2. If an object has been rotated, the pre-rotation and post-rotation coordinates of any three distinct non-collinear points p1, p2 and p3 on the object or in its space are sufficient to determine the precise rotation of the object.

Determining the Rotation Parameters

Our first step in determining the rotation of an object on which the user has performed a mousedrag is to find three distinct non-collinear points p1, p2 and p3 on the object or in its space and to determine their pre-rotation and post-rotation coordinates.  We will assume that the center of rotation is at the origin.  (If this is not the case, the original coordinate space can first be translated to a temporary coordinate space where the rotation point is the origin.  After the object is rotated, the temporary coordinate space can then be translated back to the original coordinate space).  Let the pre-rotation coordinates of p1 be a (ax, ay, az) and its post-rotation coordinates be d (dx, dy, dz).  Similarly, let the pre- and post-rotation coordinates of p2 be b and e and the pre- and post-rotation coordinates of p3 be c and f.
          pre-rotation --> post-rotation
p1:  (ax, ay, az) --> (dx, dy, dz)
p2:  (bx, by, bz) --> (ex, ey, ez)
p3:  (cx, cy, cz) --> (fx, fy, fz)
Let p1 be the point on the surface of the object that the user manipulates with the mouse.  Its pre- and post-rotation coordinates a and d were determined in the previous section.
To determine p2, we rely on a tacit expectation of the user.  When the user drags the cursor from one position to another, he expects the object to rotate around a linear axis.  Each time he drags the cursor this axis may be different because it is determined by the direction and distance of his drag but if the object does not rotate around the correct axis, he will notice the error.  This axis passes through the origin (of course) and is perpendicular to the plane defined by the origin, and p1's pre- and post-rotation coordinates.  Let p2 be an arbitrary point on this axis other than the origin.  We choose p2 = a X d.  Since p2 lies on the axis of rotation, its pre- and post-coordinates b and e are equal (b = e = a X d).
Since we know that our object was rotated about the origin, we can let p3 be the origin so that c = f = (0, 0, 0).  We now have enough data to define the essential rotation parameters which are graphically depicted in Figure 8.  The axis of rotation is simply the vector b and the angle of rotation in radians is cos⁻¹((a · d)/(||a|| ||d||)).

Figure 8.  Graphic View of the Parameters of Rotation
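These parameters are easy to verify numerically.  In the standalone sketch below (class and method names are mine), dragging p1 from a = (1, 0, 0) to d = (0, 1, 0) yields the z-axis as the axis of rotation, b = a × d = (0, 0, 1), and an angle of 90 degrees:

```java
// Worked example of the rotation parameters: axis b = a × d and
// angle cos⁻¹((a·d)/(||a|| ||d||)).
public class RotationParams {
    public static double[] axis(double[] a, double[] d){
        return new double[]{ a[1]*d[2]-a[2]*d[1],
                             a[2]*d[0]-a[0]*d[2],
                             a[0]*d[1]-a[1]*d[0] };
    }

    public static double angle(double[] a, double[] d){
        double dot = a[0]*d[0] + a[1]*d[1] + a[2]*d[2];
        double la = Math.sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]);
        double ld = Math.sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
        return Math.acos(dot/(la*ld));
    }

    public static void main(String[] args){
        double[] a = {1, 0, 0}, d = {0, 1, 0};
        double[] b = axis(a, d);                  // axis is (0, 0, 1), the z-axis
        double deg = Math.toDegrees(angle(a, d)); // angle is 90 degrees
        System.out.println(b[0] + "," + b[1] + "," + b[2] + "  " + deg);
    }
}
```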

Calculating the Rotation Matrix

Premise 3. For any desired rotation of an object about the origin, there is a unique 3X3 matrix that can multiply the pre-rotation coordinates of every point to yield its post-rotation coordinates.
Matrix multiplication is a common way for a computer program to rotate a 3D object.  One simply multiplies the pre-rotation coordinates of every one of the object's vertices by a matrix to yield its post-rotation coordinates.  From Premises 1, 2 and 3, it follows that if we can find one 3X3 matrix M that transforms the pre-rotation coordinates of p1, p2 and p3 to their respective post-rotation coordinates, then M will be the desired unique rotation matrix.  Here is the logic that will enable us to find M from the pre- and post-rotation coordinates of p1, p2 and p3:
M's desired properties:
M(ax, ay, az) = (dx, dy, dz)
M(bx, by, bz) = (ex, ey, ez)
M(cx, cy, cz) = (fx, fy, fz)
All variables except M are known.  Let P be the matrix whose columns are the pre-rotation coordinates a, b and c, and let N be the matrix whose columns are the post-rotation coordinates d, e and f.  We simply solve for M:

MP = N  by substitution into M's desired properties
MPP⁻¹ = NP⁻¹  multiplying both sides of the equation by the inverse of P
MI = NP⁻¹  since any matrix times its inverse equals the identity matrix
M = NP⁻¹  since any matrix times the identity matrix equals itself

Unfortunately, not all matrices have inverses so we must be careful in choosing p1, p2 and p3 to guarantee that P is invertible.  Since the existence of M is predicated on the rotation being around the origin, we can't consider the origin in our computations.  Nor can we consider any pair of points that is collinear with the origin.  So, p1 and p2 are useful but we must choose a p3 other than the origin.  Since any point that is in a definable position with respect to p1 and p2's pre-rotation coordinates will, after rotation, be in that same definable position with respect to their post-rotation coordinates, we'll set p3 to p1 X p2.  This sets p3's pre-rotation coordinates c to a X b and its post-rotation coordinates f to d X e.  Now that we have calculated a, b, c, d, e, and f, we can solve for M.
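The M = N·P⁻¹ construction can be checked with concrete numbers.  For the drag a = (1,0,0) → d = (0,1,0) we get b = e = a × d = (0,0,1), c = a × b = (0,-1,0) and f = d × e = (1,0,0).  In this particular example the three pre-rotation points happen to be orthonormal, so P⁻¹ is simply the transpose of P; the general case needs a true matrix inverse.  The sketch below (class and method names are mine) multiplies it out and recovers a 90° rotation about the z-axis:

```java
// Numeric check of M = N·P⁻¹ for a 90° drag of (1,0,0) to (0,1,0).
public class RotationMatrixCheck {
    public static double[][] times(double[][] A, double[][] B) {
        double[][] C = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    C[i][j] += A[i][k] * B[k][j];
        return C;
    }

    public static void main(String[] args) {
        // Columns of P are a, b, c; columns of N are d, e, f.
        double[][] Pt = {{1,0,0},{0,0,1},{0,-1,0}}; // transpose of P = inverse here
        double[][] N  = {{0,0,1},{1,0,0},{0,1,0}};
        double[][] M  = times(N, Pt);
        // M is [[0,-1,0],[1,0,0],[0,0,1]]: a 90° rotation about the z-axis,
        // so M·(1,0,0) = (0,1,0) as required.
        System.out.println(M[0][1] + " " + M[1][0] + " " + M[2][2]); // -1.0 1.0 1.0
    }
}
```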
The process of determining the correct rotation matrix M from p1's pre- and post-rotation coordinates is implemented by Quick3X3Matrix.findRotationMatrix() which is called by DrawingCanvas.mouseDrag().

//A streamlined matrix specializing in rotating 3D coordinates about the origin
class Quick3X3Matrix{
 double[][] mat;

 //construct a matrix whose columns are the three given triples
 public Quick3X3Matrix(
  OrderedTriple c1,
  OrderedTriple c2,
  OrderedTriple c3){
  mat = new double[][]{ { c1.x, c2.x, c3.x },
                        { c1.y, c2.y, c3.y },
                        { c1.z, c2.z, c3.z } };
 }

 //the identity matrix, whose columns are the three axes
 static Quick3X3Matrix identity = new Quick3X3Matrix(
   OrderedTriple.xAxis, OrderedTriple.yAxis, OrderedTriple.zAxis);

 //find a matrix M that will rotate the coordinate space around the origin to bring p1 to
 //n1 according to the user's expectations
 public static Quick3X3Matrix findRotationMatrix(
   OrderedTriple p1,
   OrderedTriple n1){
  double epsilon = 1e-5;//allow for some inaccuracy
  if(n1.isApprox(p1, epsilon)){
   return Quick3X3Matrix.identity;//no rotation necessary
  }

  //Find the pre- and post-rotation coordinates of a second point.  We will choose a
  //point on the axis of rotation.  Ordinarily the cross product will be useful but if
  //p1 = -n1, it will return the zero vector which is not useful.  In that case just use
  //the y-axis which should always be a sensible choice.
  OrderedTriple p2 = n1.isApprox(p1.negative(), epsilon) ?
    OrderedTriple.yAxis : p1.cross(n1);
  OrderedTriple n2 = p2;

  //Find the pre- and post-rotation coordinates of a third point.
  OrderedTriple p3 = p2.cross(p1);
  OrderedTriple n3 = n2.cross(n1);

  //Use matrix algebra to determine the rotation matrix: M = N·P⁻¹
  return (new Quick3X3Matrix(n1, n2, n3)).times(
    Quick3X3Matrix.inverse(p1, p2, p3));
 }

Having determined the correct rotation matrix, we may now rotate the solid.

 //Rotate the solid from lastCoord to newCoord according to the user's expectations.
 //WARNING: Italicized code is pseudo-code.  Will not compile unless pseudo-code is
 //replaced by working code.
 void rotateTheSolid(
  OrderedTriple lastCoord,
  OrderedTriple newCoord){

  Quick3X3Matrix M = Quick3X3Matrix.findRotationMatrix(
     lastCoord, newCoord);

  //rotate theSolid
  for every vertex v of theSolid
   replace v with M.times(v);
 }


 //A streamlined Quick3X3Matrix method that treats an OrderedTriple like a one
 //column matrix in order to multiply it.
 OrderedTriple times(
  OrderedTriple p){
  return new OrderedTriple(
    mat[0][0]*p.x + mat[0][1]*p.y + mat[0][2]*p.z,
    mat[1][0]*p.x + mat[1][1]*p.y + mat[1][2]*p.z,
    mat[2][0]*p.x + mat[2][1]*p.y + mat[2][2]*p.z);
 }

Out of Bounds Cursors

A serious issue that has not yet been dealt with concerns how to interpret the cursor's position when it is outside the bounds of the object.  If out of bounds drags are ignored, there will be a jerk in the object's position when the cursor re-enters the bounds at a different point than it left.  One smooth way of handling this is to "correct" the cursor's path by treating out of bounds points as if they were on the boundary edge (Figure 9).

Figure 9.  Correcting an Out-Of-Bounds Mouse Path

The Largest Visible Cross Section

But, just what is the boundary edge?  Since the user is rotating the object by dragging point p1 around the origin, p1's position is limited to the surface of a sphere with center at the origin and radius equal to p1's distance from the origin.  This sphere maps to our screen as a circle.  The boundary edge, which corresponds to the visible outer limits of the sphere, is the circumference of that circle.  One might easily and incorrectly assume that the visible outer limits of a sphere are a circular cross section with the same radius and center as the sphere.  Due to the laws of perspective, however, that can never be the case.  As may be seen in Figure 10, the visible outer limits of a sphere are a circular cross section with smaller radius than the sphere and whose center is closer to the viewer than the center of the sphere.  When the user drags the cursor out of bounds, it is the screen projection of this circle that the cursor has exited.

Figure 10.  The Largest Visible Cross Section

If we consider Figure 11, we can calculate the radius length r and the center point d of the largest visible cross section.  The sphere has radius length R and center at the origin.  Point v is the viewer.  R and v are known.

Figure 11.  Determining the Largest Visible Cross Section

dy/R = R/vy  law of similar triangles
dy = R²/vy  solving for dy
dy² + r² = R²  Pythagorean theorem
r = √(R² − dy²)  solving for r

The largest visible cross section of the sphere is a circle on the plane y = dy with center at (0, dy, 0) and radius r.
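Plugging illustrative numbers into these formulas (the values R = 100 and vy = 500 are chosen only for this example): dy = 100²/500 = 20 and r = √(100² − 20²) ≈ 97.98.  A minimal check, with an invented class name:

```java
// Sketch: compute the plane depth dy and radius r of the largest visible
// cross section for a sphere of radius R seen from viewer distance vy.
public class CrossSection {
    static double planeDepth(double R, double vy) {
        return R * R / vy;                 // dy = R^2/vy
    }
    static double radius(double R, double vy) {
        double dy = planeDepth(R, vy);
        return Math.sqrt(R * R - dy * dy); // r = sqrt(R^2 - dy^2)
    }
    public static void main(String[] args) {
        System.out.println(planeDepth(100, 500)); // 20.0
        System.out.println(radius(100, 500));     // ~97.98
    }
}
```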

Correcting the Out of Bounds Cursor

Whenever the user selects a point on the object and drags the cursor out of bounds, we give the cursor a y-coordinate of dy.  Its x- and z-coordinates are then computed to bring it to the point on the surface of the sphere at depth dy nearest the cursor's screen position.  Correction of an out of bounds cursor is implemented by DrawingCanvas.outOfBoundsMouseDrag() which is called by DrawingCanvas.toSphericalCoord().

 //A DrawingCanvas class member.  True if the cursor directly maps to a point on the
 //surface of the object
 boolean inBounds;

 //Place the cursor on the nearest point on the largest visible cross section.
 OrderedTriple outOfBoundsMouseDrag(
  OrderedTriple cursorPoint,//3D coordinates of the cursor
  double R){//the radius of the sphere of influence
  //the largest visible cross section is located on the plane y = d
  double d = R*R/SpacePoint.viewer.y;

  //the radius of the largest visible cross section
  double r = Math.sqrt(R*R - d*d);

  double m = cursorPoint.length();//cursor's distance from the y-axis

  //what we will return as the cursor's 3D location
  OrderedTriple inBoundsPoint;

  inBoundsPoint = cursorPoint.times(r/m);
  inBoundsPoint.y = d;
  return inBoundsPoint;
 }
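Under the same assumption the code above relies on (the incoming cursor point has a y-coordinate of 0, so its length is its distance from the y-axis), the correction can be tried in a standalone sketch.  The class and method names here are invented for illustration; the point is that the corrected cursor always lands back on the sphere of influence:

```java
// Sketch of the out-of-bounds correction: snap a cursor point at (x, 0, z)
// onto the largest visible cross section (the circle of radius r on the
// plane y = d).  The result necessarily lies on the sphere of radius R,
// because r^2 + d^2 = R^2.
public class OutOfBounds {
    static double[] snap(double x, double z, double R, double vy) {
        double d = R * R / vy;                 // depth of the cross section
        double r = Math.sqrt(R * R - d * d);   // its radius
        double m = Math.sqrt(x * x + z * z);   // cursor's distance from the y-axis
        return new double[] { x * r / m, d, z * r / m };
    }
    public static void main(String[] args) {
        double[] p = snap(300, -400, 100, 500); // cursor far out of bounds
        double len = Math.sqrt(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]);
        System.out.println(len);               // ~100: back on the sphere
    }
}
```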

Fine-Tuning the Process

Accessing the Back Side

If we confine a dragged cursor to the bounds of the largest visible cross section, it will always refer to a point on the front side of the sphere of influence.  This might seem reasonable at first, but while rotating the object there are times when it would be nice to have access to points on the back side of the sphere of influence.  This is certainly true if the user happens to click on a point on the object surface that is already on the back side of the sphere.  It is also true when the user desires to rotate the object all the way around the x- or z-axis.  OrderedTriple.SectSphereLine() always returns two points of intersection.  One is a point on the front side of the sphere and the other (with greater y coordinate) is a point on the back side of the sphere.  To enable access to the back side of the sphere we can choose the farther of the two points of intersection, but how do we decide when to select one over the other?
When the user first clicks on the image, the cursor is assigned to coordinates on the visible surface of the object which may be on the front or back side of the sphere.  (Code Excerpt 1)
Code Excerpt 1

 //a DrawingCanvas member.  True if the cursor refers to a point on the back side of
 //the sphere of influence.
 boolean backSide;

 //Determine whether the selected point is in back or in front of the largest visible cross
 //section.  Place this within DrawingCanvas.toSurfaceCoord.
  //the largest visible cross section is located on the plane y = d
  double d = lastRadius*lastRadius/SpacePoint.viewer.y;

  backSide = surfacePoint.y > d;

Thereafter, while the user is rotating the solid we keep the cursor on that same side of the sphere of influence.  In other words, if the cursor was on the front side, we pick the intersection on the front side and if the cursor was on the back side we pick the intersection on the back side.  We make the following exception: Each time the user drags the cursor back into bounds after leaving bounds, we give him control over the other side of the sphere.  By alternating front and back sides this way, we allow the user to rotate the object full circle (or more) by dragging the cursor back and forth in and out of bounds.  (Code Excerpt 2)
Code Excerpt 2

 //Alternate front and back sides to allow the user to rotate the solid full circle.  Place
 //this within DrawingCanvas.toSphericalCoord just before returning one of the
 //two points of intersection.
  //we have determined that the cursor is within bounds but we haven't yet updated
  //inBounds
  if(! inBounds){//we just re-entered bounds
   inBounds = true;//remember that we're in bounds

   //switch to the other side of the sphere of influence
   backSide = ! backSide;
  }
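The choice between the two intersection points can be sketched in standalone form for the simplest possible case: sight lines taken as parallel to the y-axis (an assumption made only for this sketch; the article's SectSphereLine() handles true perspective lines).  A depth line through (x, z) meets the sphere x² + y² + z² = R² at y = ±√(R² − x² − z²), and the backSide flag simply picks the greater y, matching the convention above.  The class name is invented for illustration:

```java
// Sketch: pick the front or back intersection of a depth line with the
// sphere of influence, mirroring the backSide flag.  Sight lines are
// assumed parallel to the y-axis, so the two hits are symmetric in y.
public class SideChooser {
    static double intersectY(double x, double z, double R, boolean backSide) {
        double y = Math.sqrt(R * R - x * x - z * z);
        return backSide ? y : -y;  // greater y is the back side
    }
    public static void main(String[] args) {
        System.out.println(intersectY(3, 4, 13, false)); // front: -12.0
        System.out.println(intersectY(3, 4, 13, true));  // back:   12.0
    }
}
```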

Constraining Rotation to One of the Axes

Our algorithm now enables us to select any visible point on the object surface and to rotate the object to any orientation by dragging the point anywhere on the sphere of influence.  Whether the selected point is on the back or front side of the sphere of influence, it exactly follows the dragged cursor in accordance with true-to-life expectations.  We have therefore met the requirements for true-to-life rotation.  However, as anyone who has tried to draw a freehand curve with a mouse can testify, mice are not easy to control precisely.  If, for instance, we want to rotate a solid purely around the x-axis, we have to click on a surface point with x-coordinate of 0 and then drag the cursor precisely along a vertical line.  Any deviation from this line throws off our rotation in a way that can't be rectified by simply returning the cursor to the line.  Similarly, pure z-axis rotation requires clicking on a surface point with z-coordinate of 0 and dragging the cursor precisely along a horizontal line.  Pure y-axis rotation is even more difficult, requiring the user to click on a point with y-coordinate of 0 (often impossible because it is on the back side of the sphere of influence) and then to drag the cursor in an exact circular path around the origin (that's a laugh).  These requirements for pure axis rotation are as true in the real world as they are in the virtual world, but it is easier to control a hand on a globe than it is to control a cursor on a virtual object.
As a convenience to the user, we further amend our algorithm to simplify pure axis rotation.  Constraint to the x- or z-axis (see Code Excerpt 3) is enabled when the user presses the shift key, the particular axis being determined by the initial direction of the dragged cursor.  While the cursor is dragged, we assign it an x-coordinate of 0 in the case of x-axis rotation or a z-coordinate of 0 in the case of z-axis rotation.
Code Excerpt 3

 //a DrawingCanvas member.  Possible values are 'x', 'y', and 'z' for constrained rotation
 //around the x-, y- or z-axis.  0 means no constraint.
 int axisConstraint = 0;

 //Constrain rotation to the x or z-axis.  Place this within DrawingCanvas.mouseDrag to
 //alter event.x and event.y before newCoord is computed.
  if((event.modifiers & Event.SHIFT_MASK) != 0){
   if(axisConstraint == 0){
    SpacePoint p = new SpacePoint(lastCoord);
    int dx = event.x - p.screenx;
    int dz = event.y - p.screeny;
    axisConstraint = dx*dx > dz*dz ? 'z' : 'x';
   }
   if(axisConstraint == 'z'){
    lastCoord.z = 0;
    lastRadius = lastCoord.length();
    event.y = SpacePoint.originy;
   }
   else if(axisConstraint == 'x'){
    lastCoord.x = 0;
    lastRadius = lastCoord.length();
    event.x = SpacePoint.originx;
   }
  }
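The axis selection itself is only a comparison of squared screen deltas: a mostly horizontal initial drag constrains rotation to the z-axis, a mostly vertical one to the x-axis.  A tiny standalone version (the class name is invented for illustration):

```java
// Sketch: choose the constrained axis from the initial drag direction,
// exactly as Code Excerpt 3 does.  dx and dy are screen-pixel deltas.
public class AxisPick {
    static char pickAxis(int dx, int dy) {
        // horizontal motion dominates -> spin around the z-axis;
        // otherwise (including a tie) -> spin around the x-axis
        return dx * dx > dy * dy ? 'z' : 'x';
    }
    public static void main(String[] args) {
        System.out.println(pickAxis(30, 5));  // horizontal drag: z
        System.out.println(pickAxis(4, 25));  // vertical drag:   x
    }
}
```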

Y-axis rotation is enabled whenever the user clicks on a point not on the virtual object.  We assign the cursor a y-coordinate of 0 and multiply its x- and z-coordinates to keep them on a pre-determined circle around the origin.
Code Excerpt 4

 //Place within toSurfaceCoord when we discover that the user has clicked outside of
 //the 3D image
  if(surfacePoint == null){//The user clicked outside the 3D image
   //Pretend he clicked on a point with depth 0 and enter y-axis rotation mode.
   inBounds = false;
   axisConstraint = 'y';
   surfacePoint = outOfBoundsMouseDown(cursorPoint);
  }

 //a DrawingCanvas member. An arbitrary radius value we use when theSolid is
 //rotating around the y-axis
 double yrotRadius = 100;

 //Keeping the cursor's direction from the origin the same, bring it to the circumference
 //of a circle centered on the origin with radius length yrotRadius.  If the cursor
 //happens to be on the origin, it has no direction so return null.  The calling routine
 //will then ignore the mouseDown.
 OrderedTriple outOfBoundsMouseDown(
  OrderedTriple cursorPoint){
  double m = cursorPoint.length();
  if(m == 0) return null;

  //remember the new radius of the sphere of influence
  lastRadius = yrotRadius;

  //place the coordinate on this sphere
  //cursorPoint already conveniently has y-coordinate of 0
  return cursorPoint.times(lastRadius/m);
 }

We continue to do this as the user drags the cursor. (Code Excerpt 5).  What the user can't do by hand, we do with a little bit of code.
Code Excerpt 5

 //Constrain rotation to the y-axis.  Place this within DrawingCanvas.mouseDrag before
 //newCoord is computed.  We must generate a newCoord that is exactly on the
 //circumference of a circle with radius length yrotRadius.  Since the usual method for
 //computing a coordinate based on the cursor's 2D position will not be exact, we have
 //to handle the computation of newCoord here too.
  if(axisConstraint == 'y'){

   //the cursor's 3D coordinates
   OrderedTriple cursorPoint =
     new SpacePoint(event.x, event.y);

   //if the cursor is on the origin we can't use it
   if( cursorPoint.x == 0 && cursorPoint.z == 0 ) return;

   //put newCoord on the circumference of the circle
   newCoord = cursorPoint.times(
     lastRadius/cursorPoint.length() );
  }


By consistently applying the laws of perspective we have arrived at a method for allowing true-to-life mouse control over the rotation of a virtual 3D object.  This method is implemented in an applet I wrote named Archimedean which may be found on the internet at <>.  Keeping in mind the performance limitations inherent in a 100% Java implementation, the rotational response to cursor movements in Archimedean is the best I have seen in any 3D program on the web, including VRML viewers.  Most unique is the care taken in selecting the exact point on an object's surface that corresponds to the cursor location.  When coupled with the correct interpretation of the cursor's path, this produces a remarkable feeling of "hands on" manipulation.