Reservoir computing is a machine learning paradigm that was proposed as a model of cortical information processing in the brain. It processes information using the spatiotemporal dynamics of a large-scale recurrent neural network and is expected to improve power efficiency and speed in neuromorphic computing systems. Previous theoretical investigations have shown that brain networks exhibit an intermediate state between full coherence and random firing, which is suitable for reservoir computing. However, how reservoir performance is influenced by network connectivity, in particular the connectivity revealed by recent connectomics analyses of brain networks, remains unclear. Here, we constructed modular networks of integrate-and-fire neurons and investigated the effect of modular structure and the excitatory-inhibitory neuron ratio on network dynamics. The dynamics were evaluated using three measures: synchronous bursting frequency, mean correlation, and functional complexity. We found that in a purely excitatory network, the complexity was independent of the modularity of the network. In contrast, networks containing inhibitory neurons exhibited complex network activity when the modularity was high. Our findings reveal a fundamental aspect of reservoir performance in brain networks, contributing to the design of bio-inspired reservoir computing systems.
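The setup and measures named above can be illustrated with a short simulation. The following is a minimal sketch, not the authors' code: all parameter values (network size, number of modules, intra- and inter-module connection probabilities, synaptic weights, noise drive, the 50 ms binning, and the burst threshold) are illustrative assumptions, and functional complexity is computed here as the deviation of the pairwise-correlation histogram from a uniform distribution, which is one common definition and may differ from the paper's exact formula.

```python
# Sketch: modular network of leaky integrate-and-fire (LIF) neurons with an
# excitatory/inhibitory split, plus the three measures from the abstract.
# All parameters are illustrative assumptions, not the paper's values.
import numpy as np

rng = np.random.default_rng(0)

# --- network construction ---------------------------------------------------
N, n_modules = 400, 4               # assumed network size and module count
frac_exc = 0.8                      # assumed 80/20 excitatory/inhibitory split
p_intra, p_inter = 0.2, 0.02        # higher intra- than inter-module probability
module = np.repeat(np.arange(n_modules), N // n_modules)
is_exc = rng.random(N) < frac_exc

same = module[:, None] == module[None, :]
p = np.where(same, p_intra, p_inter)
conn = (rng.random((N, N)) < p) & ~np.eye(N, dtype=bool)
W = np.zeros((N, N))                # W[j, i]: weight from presynaptic j to i
W[conn] = 0.5                       # excitatory kick per spike (mV, assumed)
W[~is_exc, :] *= -4.0               # inhibitory rows become negative

# --- LIF simulation ----------------------------------------------------------
dt, T = 1.0, 5000                   # ms time step, 5 s of activity
tau_m, v_rest, v_thresh, v_reset = 20.0, 0.0, 20.0, 0.0
v = rng.uniform(v_rest, v_thresh, N)
spikes = np.zeros((T, N), dtype=bool)
for t in range(1, T):
    syn = spikes[t - 1].astype(float) @ W      # recurrent input from last step
    noise = (rng.random(N) < 0.05) * 12.0      # sparse background drive (assumed)
    v += dt / tau_m * (v_rest - v) + syn + noise
    fired = v >= v_thresh
    v[fired] = v_reset
    spikes[t] = fired

# --- measures ----------------------------------------------------------------
pop = spikes.sum(axis=1)                       # population spike count per ms
burst_thresh = 0.2 * N                         # assumed burst criterion
bursts = np.sum((pop[1:] >= burst_thresh) & (pop[:-1] < burst_thresh))
burst_freq = bursts / (T / 1000.0)             # synchronous bursts per second

counts = spikes.reshape(T // 50, 50, N).sum(axis=1)   # 50 ms spike counts
active = counts.std(axis=0) > 0                       # drop silent neurons
r = np.corrcoef(counts[:, active].T)
upper = r[np.triu_indices(active.sum(), k=1)]
mean_corr = upper.mean()                              # mean pairwise correlation

m = 20                                                # histogram bins
hist, _ = np.histogram(np.abs(upper), bins=m, range=(0.0, 1.0))
p_hat = hist / hist.sum()
# Functional complexity: 1 minus the normalized deviation of the correlation
# distribution from uniform (one common definition; the paper's may differ).
complexity = 1.0 - np.abs(p_hat - 1.0 / m).sum() / (2.0 * (m - 1) / m)

print(f"bursts/s {burst_freq:.2f}, mean corr {mean_corr:.3f}, complexity {complexity:.3f}")
```

Sweeping `p_intra`/`p_inter` (i.e., the modularity) and `frac_exc` in this sketch gives the kind of comparison the abstract describes between purely excitatory and mixed excitatory-inhibitory networks, though the quantitative behavior will depend on the assumed parameters.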