The scarcity of 3D segmentation labels is one of the main obstacles to effective point cloud segmentation, especially for scenes in the wild containing a wide variety of objects. To alleviate this issue, we propose a novel graph convolutional deep framework for large-scale semantic scene segmentation of point clouds with only 2D supervision. Unlike numerous preceding multi-view supervised approaches that focus on single-object point clouds, we argue that 2D supervision can provide sufficient guidance for training a 3D semantic segmentation model on natural scene point clouds, even with only a single view per sample, while not explicitly capturing their inherent structures. Specifically, we design a Graph-based Pyramid Feature Network (GPFN) to implicitly infer both global and local features of point sets, and propose a perspective rendering and semantic fusion module to provide refined 2D supervision signals for training, along with a 2D-3D joint optimization strategy. Extensive experimental results demonstrate the effectiveness of our 2D-supervised framework, which achieves results comparable to state-of-the-art approaches trained with full 3D labels for semantic point cloud segmentation on the popular S3DIS benchmark.