In the pursuit of a fully autonomous learning agent able to interact, move, and be useful in the real world, two fundamental problems are path planning with motion control, and user-agent interaction. We address both through reinforcement learning with our Path Planning and Motion Controller (PPMC) training algorithm, which combines observable goals, randomization of goals during training, and a customized reward function to teach a simulated quadruped agent to respond to user commands and to travel to designated areas. We identify two critical components of path planning and motion control: the first is region-enabled travel, the ability to travel toward any location within a prescribed area; the second is multi-point travel, the ability to travel to multiple points in succession. An important open-ended question is how many tasks a single policy should handle, and whether a single policy can even learn to manage several tasks. We demonstrate that it is possible to contain both a mapless path planner and a motion controller in a single neural network, which is promising for future work given their interlinked and synergistic nature. Using control-group policies, various test cases, and both ACKTR and PPO, we empirically validate that our algorithm teaches the agent to respond to user commands and to perform path planning and motion control.
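
The core training ingredients named above (an observable goal, goal randomization per episode, and a shaped reward) can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the region bounds, step sizes, reward constants, and all function names are assumptions for the sake of a self-contained example.

```python
import math
import random

# Assumed square region within which goals are randomized each episode.
REGION = (-5.0, 5.0)

def sample_goal():
    """Goal randomization: draw a fresh target inside the region per episode."""
    return (random.uniform(*REGION), random.uniform(*REGION))

def observe(pos, goal):
    """Observable goal: goal coordinates are part of the observation."""
    return (pos[0], pos[1], goal[0], goal[1])

def reward(prev_pos, pos, goal, reached_bonus=10.0):
    """Customized reward (illustrative): progress toward the goal,
    plus a bonus while the agent is within a small arrival radius."""
    def dist(p):
        return math.hypot(goal[0] - p[0], goal[1] - p[1])
    r = dist(prev_pos) - dist(pos)  # positive when the step made progress
    if dist(pos) < 0.1:
        r += reached_bonus
    return r

def run_episode(policy, steps=50):
    """One training episode: random goal, accumulate shaped reward."""
    goal = sample_goal()
    pos = (0.0, 0.0)
    total = 0.0
    for _ in range(steps):
        action = policy(observe(pos, goal))  # action = small (dx, dy) move
        new_pos = (pos[0] + action[0], pos[1] + action[1])
        total += reward(pos, new_pos, goal)
        pos = new_pos
    return total
```

Under this reward, a policy that heads toward the goal accrues positive return on every randomly sampled goal, which is the property that lets a single network learn goal-directed travel rather than one fixed route.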