Object detectors are indispensable components in many computer vision and artificial intelligence systems, such as autonomous robots and image analyzers that profile social media users. Analyzing their vulnerabilities is essential for detecting and preventing attacks and for minimizing potential losses. Researchers have proposed a number of adversarial examples to evaluate the robustness of object detectors. All of these adversarial examples change pixels inside the target object to carry out attacks, but only some of them are suitable for physical attacks. To the best of the authors' knowledge, no published work has successfully attacked an object detector without changing pixels inside the target object. In an unpublished work, the authors designed an adversarial border that tightly surrounds the target object and successfully misleads Faster R-CNN and YOLOv3, both digitally and physically. The adversarial border does not change pixels inside the target object, but it makes the object look unnatural. In this paper, a new adversarial example named the adversarial signboard, which looks like an ordinary signboard, is proposed. Placed below a target object, it can mislead state-of-the-art object detectors. Using the stop sign as the target object, the adversarial signboard is evaluated on 48 videos with a total of 5416 frames. The experimental results show that an adversarial signboard derived from Faster R-CNN with a ResNet-101 backbone can mislead Faster R-CNN with a different backbone network, Mask R-CNN, YOLOv3, and R-FCN, both digitally and physically.