We present a novel approach for interactive multimedia content creation that establishes an interactive environment in cyberspace in which users interact with autonomous agents generated from video images of real-world creatures. Each agent has autonomy and personality traits, and its behaviors reflect the results of various interactions as determined by an emotional model based on fuzzy logic. Once an agent's behavior is determined, the sequence of video images that best matches that behavior is retrieved from a database storing a variety of video image sequences of the real creature's behaviors. The retrieved images are displayed successively in cyberspace, making the environment responsive; the autonomous agent thus behaves continuously. In addition, an explicit sketch-based method directly initiates the agent's reactive behavior without involving the emotional process. This paper describes the algorithms that establish such an interactive system. First, an image processing algorithm for generating the video database is described. Then the processes of behavior generation using the emotional model and of sketch-based instruction are introduced. Finally, two application examples are demonstrated: video agents generated from humans and from goldfish.
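The pipeline summarized above (fuzzy emotional evaluation, behavior determination, then retrieval of the best-matching video sequence) can be sketched minimally as follows. All emotion names, membership functions, behavior labels, and clip identifiers here are illustrative assumptions for exposition, not the paper's actual model or data.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Assumed emotions, each with a fuzzy membership over a scalar
# stimulus intensity roughly in [0, 1].
EMOTION_MEMBERSHIPS = {
    "fear":      lambda s: triangular(s, 0.5, 1.0, 1.5),   # strong stimuli
    "curiosity": lambda s: triangular(s, 0.1, 0.45, 0.8),  # moderate stimuli
    "calm":      lambda s: triangular(s, -0.5, 0.0, 0.4),  # weak stimuli
}

# Assumed mapping from the dominant emotion to a behavior label.
BEHAVIOR_OF = {"fear": "flee", "curiosity": "approach", "calm": "idle"}

# Toy stand-in for the video database: behavior label -> stored
# image-sequence id (the paper retrieves the best-matching sequence).
VIDEO_DB = {"flee": "clip_017", "approach": "clip_042", "idle": "clip_003"}

def select_clip(stimulus: float) -> str:
    """Fuzzify the stimulus, pick the dominant emotion's behavior,
    and return the id of the matching video sequence."""
    degrees = {name: f(stimulus) for name, f in EMOTION_MEMBERSHIPS.items()}
    dominant = max(degrees, key=degrees.get)
    return VIDEO_DB[BEHAVIOR_OF[dominant]]
```

A strong stimulus (e.g. `select_clip(0.9)`) activates the fear membership most strongly and retrieves the "flee" clip, while a weak one falls through to "idle"; the sketch-based method described later would bypass this fuzzy stage and invoke a behavior label directly.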