This paper presents a novel application of remote sensing data and machine learning for damage classification in a real-world cross-domain setting. The proposed methodology trains models to learn the building damage characteristics recorded after the 2011 Tohoku Tsunami from multi-sensor and multi-temporal remote sensing images. The trained models are then tested on the recent 2018 Sulawesi Tsunami. Additionally, a high-resolution SAR image was simulated to address a missing data modality. Our initial results show that ResNet-derived features from optical images acquired after the disaster, together with moderate- and high-resolution synthetic aperture radar (SAR) post-event intensity data, classify two levels of tsunami-induced damage with significant accuracy, achieving an average F-score of approximately 0.72. Given that no training data from the 2018 Sulawesi Tsunami were used, our methodology shows excellent potential for the future implementation of a rapid response system based on a database of building damage compiled from previous major disasters.
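To make the reported metric concrete, the sketch below shows how an average (macro) F-score over two damage classes can be computed. This is an illustrative example only, not the authors' code: the class names and predictions are invented placeholders.

```python
def f1(tp, fp, fn):
    # F-score for one class: harmonic mean of precision and recall
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def macro_f1(y_true, y_pred, classes):
    # Average of per-class F-scores, as in the "average F-score" reported
    scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        scores.append(f1(tp, fp, fn))
    return sum(scores) / len(scores)

# Hypothetical labels for two tsunami damage levels (placeholder data)
y_true = ["washed-away", "survived", "washed-away", "survived", "washed-away"]
y_pred = ["washed-away", "survived", "survived", "survived", "washed-away"]
print(macro_f1(y_true, y_pred, ["washed-away", "survived"]))  # → 0.8
```

In a cross-domain evaluation such as the one described, `y_true` would come from ground-truth damage surveys of the target event and `y_pred` from models trained only on the source event.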