Deep neural networks (DNNs) have achieved impressive performance in many domains, including computer vision, natural language processing, speech, and robotics. However, DNNs have been shown to be vulnerable to several classes of attacks, including adversarial attacks and backdoor attacks, which can seriously endanger people's lives in security- and safety-sensitive applications such as self-driving cars. In this talk, I present my recent work on the trustworthiness of DNNs. I first discuss how to defend against evolving adversarial attacks that may be unknown at training time. Next, I introduce a simple defense against physical adversarial attacks on DNN-based object detectors. Finally, I share a discovery about the trade-off between adversarial robustness and backdoor robustness in DNNs. These findings suggest that future research on defenses should account for both adversarial and backdoor attacks when designing algorithms or robustness measures, to avoid pitfalls and a false sense of security.