We present GyroWand, a raycasting technique for 3D interaction with self-contained augmented reality (AR) head-mounted displays. Unlike traditional raycasting, which requires absolute spatial and rotational tracking of the user's hand or controller to direct the ray, GyroWand relies on the relative rotation values captured by an inertial measurement unit (IMU) on a handheld controller. These values cannot be mapped directly to the ray direction because of sensor drift and the mismatch between the orientation of the physical controller and that of the virtual content. To address these challenges, GyroWand 1) interprets the relative rotation values using a state machine comprising anchor, active, out-of-sight, and disambiguation states; 2) handles drift by resetting the default rotation when the user moves between the anchor and active states; 3) initiates the ray not from the user's hand but from another spatial coordinate (e.g., the chin, shoulder, or chest); and 4) provides three new disambiguation mechanisms: Lock&Twist, Lock&Drag, and AutoTwist. In a series of controlled user studies we evaluated the performance and convenience of different GyroWand design parameters. Results show that a ray originating from the user's chin facilitates selection, and that Lock&Twist is faster and more accurate than the other disambiguation mechanisms. We conclude with a summary of lessons learned for the adoption of raycasting in mobile AR head-mounted displays.
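The core ideas in the abstract, a four-state interpretation of relative IMU rotations and a drift reset on the anchor-to-active transition, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class and method names, the Euler-angle drift model, and the simple subtraction of a reference rotation are all assumptions for exposition.

```python
from enum import Enum, auto

class State(Enum):
    ANCHOR = auto()          # controller at rest; ray parked at a default direction
    ACTIVE = auto()          # relative rotations steer the ray
    OUT_OF_SIGHT = auto()    # ray points outside the display's field of view
    DISAMBIGUATION = auto()  # ray hits multiple targets; user refines the selection

class GyroWandStateMachine:
    """Illustrative sketch (assumed names and drift model) of interpreting
    relative IMU rotation values through the four states named in the abstract."""

    def __init__(self):
        self.state = State.ANCHOR
        # Yaw/pitch/roll reference captured at the last reset; subtracting it
        # from subsequent readings discards drift accumulated while anchored.
        self.reference = (0.0, 0.0, 0.0)

    def activate(self, imu_rotation):
        # Moving from ANCHOR to ACTIVE resets the default rotation,
        # which is how the technique handles sensor drift.
        self.reference = imu_rotation
        self.state = State.ACTIVE

    def ray_direction(self, imu_rotation):
        # The ray direction is the rotation relative to the last reset,
        # applied at a fixed origin (e.g., the user's chin) rather than
        # at the untracked hand position.
        yaw = imu_rotation[0] - self.reference[0]
        pitch = imu_rotation[1] - self.reference[1]
        return (yaw, pitch)
```

In this sketch, raw IMU angles are never used directly; only the offset from the most recent reset drives the ray, mirroring the abstract's point that relative rotation values cannot be mapped to the ray direction without a resettable reference.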