Mengqing Jiang
Master of Science, Computer Vision
School of Computer Science
Carnegie Mellon University
Email: jiangmengqing1121 AT gmail.com
Hi! I am currently a first-year graduate student at Carnegie Mellon University. I obtained my B.Eng. in Software Engineering from Tsinghua University. During my undergraduate studies, I did a research internship at Berkeley DeepDrive, advised by Prof. Trevor Darrell. Before that, I worked remotely with Prof. Sung Kim for a year and visited HKUST twice. I also had a great time interning at Momenta on autonomous driving and at SenseTime Inc. on computer vision.
My interests lie in computer vision, robotics, and their applications. I will be interning with Uber ATG's perception team in Summer 2019.
Download my CV
MPilot Algorithm Simulation on UE4 with CarSim Plugins
Mengqing Jiang, Zizhe Xu, Sibo Jia
Mar 2018 - Jun 2018, Momenta
To conduct large-scale automated safety tests for Momenta's auto-pilot algorithm, MPilot, I developed a pipeline for Lincoln MKZ dynamics simulation on Unreal Engine 4 with CarSim plugins, reproduced the algorithm in the simulator, and created testing scenarios such as the front car decelerating sharply and a car cutting in from the left or right. I also investigated active sub-lane-changing algorithms for autonomous cars in traffic congestion and ran experiments in this simulator.
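The front-car hard-deceleration scenario boils down to a kinematic safety check. Below is a minimal, self-contained sketch of that idea in Python; the function name and parameters are illustrative assumptions, not the actual MPilot or CarSim interface:

```python
def min_gap(v_ego, v_front, gap, a_ego, a_front, dt=0.01, t_max=20.0):
    """Time-stepped constant-deceleration kinematics for the
    'front car brakes hard' scenario (toy illustration).
    v_ego, v_front: initial speeds (m/s); gap: initial gap (m);
    a_ego, a_front: braking decelerations (m/s^2).
    Returns the minimum bumper-to-bumper gap; <= 0 means collision."""
    x_ego, x_front = 0.0, gap
    t, m = 0.0, gap
    while t < t_max:
        # advance positions, then apply braking (speeds clamp at 0)
        x_ego += v_ego * dt
        x_front += v_front * dt
        v_ego = max(0.0, v_ego - a_ego * dt)
        v_front = max(0.0, v_front - a_front * dt)
        m = min(m, x_front - x_ego)
        t += dt
    return m
```

For example, with both cars at 20 m/s and a 30 m gap, a front car braking at 6 m/s² while the ego car only manages 4 m/s² still leaves a positive minimum gap, whereas the same mismatch with a 5 m gap produces a collision.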
Imitated Control for Vehicle Pedestrian Interaction
Mengqing Jiang, Nathan Lambert, Fisher Yu, Anca Dragan, Trevor Darrell
July 2017 - Nov 2017, UC Berkeley
In this work, we present a regression model acting as a vehicle controller for the interaction between an approaching vehicle and a pedestrian crossing an intersection. The vehicle and the pedestrian are detected by clustering the LIDAR point cloud. Using an LSTM network, the vehicle predicts the desired velocity even through mis-identifications in the point-cloud data. This prediction model demonstrates the use of LSTMs for spotty data and for future trajectory-planning applications.
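The key idea, carrying the hidden state across frames where the detector fails, can be sketched with a minimal NumPy LSTM cell. This is a toy stand-in under assumed shapes and names, not the project's actual controller:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MinimalLSTM:
    """Single-layer LSTM cell with randomly initialized weights
    (hypothetical stand-in for the trained controller network)."""
    def __init__(self, n_in, n_hid, seed=0):
        rng = np.random.default_rng(seed)
        # one stacked weight matrix for the four gates (i, f, g, o)
        self.W = rng.normal(0.0, 0.1, (4 * n_hid, n_in + n_hid))
        self.b = np.zeros(4 * n_hid)
        self.n_hid = n_hid

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

def predict_velocity(lstm, w_out, frames):
    """frames: list of (feature_vector, valid_flag). On mis-identified
    frames (valid=False) the input is zeroed, so the hidden state
    carries the velocity estimate across the gap."""
    h = np.zeros(lstm.n_hid)
    c = np.zeros(lstm.n_hid)
    for x, valid in frames:
        inp = x if valid else np.zeros_like(x)
        h, c = lstm.step(inp, h, c)
    return float(w_out @ h)  # scalar desired velocity
```

Even with an untrained cell, a dropped middle frame still yields a finite prediction, which is the property that makes the recurrent formulation robust to spotty detections.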
Experimental Platform and Visualization Dashboard on ROS for Self-driving Car
Mengqing Jiang, Gray Chen, Yujia Luo, Fisher Yu
July 2017 - Oct 2017, UC Berkeley
To provide a better platform for conducting and debugging experiments on the unmanned vehicle, we implemented a set of visualization tools and integrated them into one dashboard. The tools can display camera images (both the originals and object-detection bounding boxes), inspect control-signal values and plot charts, visualize the LIDAR point cloud, and plot the driving track on Google Maps according to the GPS information. Moreover, we added a timeline to the rosbag player so that messages in bag files can easily be rewound and inspected.
Mengqing Jiang, Xiaodong Gu, Sung Kim, Chunping Li
Sep 2016 - Aug 2017, Tsinghua University, Hong Kong University of Science and Technology
Transforming a hand-drawn doodle into HTML/CSS is a typical and essential step in building customized websites and arranging their layouts, since it is easier and faster for designers to illustrate ideas by drawing on a canvas; it is also a very challenging task. In this work, we draw on recent advances in image captioning and present an end-to-end deep neural model with a CNN encoder and an RNN decoder that translates a web-page layout into HTML code which, after browser rendering, displays as the given image. Two datasets are built for this task: a large-scale program-generated web-screenshot dataset and a collected hand-drawn web-doodle dataset. The proposed model performs well on both, outputting high-quality, accurate code with no HTML grammar errors and achieving BLEU scores of 66.45 and 49.21, respectively. Experiments also show that transfer learning from the large-scale screenshot dataset strongly enhances the model's performance on the doodle dataset. Moreover, a web application named "Doodle2Code" has been developed, allowing users to translate their doodles into HTML code and modify the generated code until they see exactly the web page they want online; a user study was conducted on it.
[thesis (in Chinese)]
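The encoder-decoder pipeline above can be sketched in miniature: a linear map stands in for the CNN encoder, and a plain recurrent cell with greedy decoding stands in for the RNN decoder. Everything here (the toy vocabulary, class name, and weights) is an illustrative assumption, not the thesis model:

```python
import numpy as np

# Toy vocabulary of HTML tokens (the real model's vocabulary is much larger)
VOCAB = ["<start>", "<end>", "<div>", "</div>", "<p>", "</p>"]

class EncoderDecoderSketch:
    """Hypothetical miniature of a CNN-encoder / RNN-decoder model:
    the image feature seeds the decoder state, which then emits
    HTML tokens greedily until it predicts <end>."""
    def __init__(self, img_pixels, hid=16, seed=0):
        rng = np.random.default_rng(seed)
        V = len(VOCAB)
        self.W_enc = rng.normal(0.0, 0.1, (hid, img_pixels))  # "CNN" encoder
        self.W_emb = rng.normal(0.0, 0.1, (V, hid))           # token embeddings
        self.W_rec = rng.normal(0.0, 0.1, (hid, 2 * hid))     # recurrent cell
        self.W_out = rng.normal(0.0, 0.1, (V, hid))           # token logits

    def decode(self, image, max_len=20):
        h = np.tanh(self.W_enc @ image.reshape(-1))  # encode the layout image
        tok = VOCAB.index("<start>")
        out = []
        for _ in range(max_len):
            h = np.tanh(self.W_rec @ np.concatenate([self.W_emb[tok], h]))
            tok = int(np.argmax(self.W_out @ h))     # greedy decoding
            if VOCAB[tok] == "<end>":
                break
            out.append(VOCAB[tok])
        return out
```

In the real model the encoder is a deep CNN and training minimizes the token-level cross-entropy against ground-truth HTML, but the decode loop has this shape.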
Residual Attention Network for Image Classification
Fei Wang, Mengqing Jiang, Chen Qian, Shuo Yang, Cheng Li, Honggang Zhang, Xiaogang Wang, Xiaoou Tang
Aug 2016 - Nov 2016, SenseTime Inc.
In this work, we propose the "Residual Attention Network", a convolutional neural network using an attention mechanism that can be incorporated into state-of-the-art feed-forward network architectures in an end-to-end training fashion. It is built by stacking Attention Modules that generate attention-aware features, and the attention-aware features from different modules change adaptively as layers go deeper. Inside each Attention Module, a bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks that can easily scale up to hundreds of layers.
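The attention residual learning step computes H(x) = (1 + M(x)) * F(x), where F is the trunk branch and M a soft mask in (0, 1) from the bottom-up top-down branch. A few lines of NumPy sketch it; `trunk` and `mask_branch` here are toy stand-ins for the real convolutional branches:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_module(x, trunk, mask_branch):
    """Attention residual learning: H(x) = (1 + M(x)) * F(x).
    The identity term (the '1 +') lets good trunk features pass
    through unchanged even where the mask M is near zero, which is
    what makes very deep stacks of these modules trainable."""
    F = trunk(x)                  # trunk branch features
    M = sigmoid(mask_branch(x))   # soft mask in (0, 1)
    return (1.0 + M) * F
```

Because 1 + M lies in (1, 2), the output magnitude always sits between |F| and 2|F|: the mask can only emphasize trunk features, never suppress them to zero, unlike a plain multiplicative mask M(x) * F(x).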
WeLearn (WeChat App)
Zhaoyang Li, Yonghe Wang, Mengqing Jiang, Bin Liu
Nov 2016 - Dec 2016, Tsinghua University
We implemented a mobile web application based on WeChat (the most popular social networking application in China). After periodically crawling course assignments, announcements, etc. from the Tsinghua WebLearning website, WeLearn provides a user-friendly interface, including a dashboard, a calendar, and so on, for students to check homework, announcements, lectures, and other information, and sends notifications to alert students to important deadlines. Besides, we integrated a "Team Finder" function into WeLearn for students to find project group members online.
On The Road (Webpage Game)
Mengqing Jiang, Yonghe Wang
May 2016, Tsinghua University
[blog (in Chinese)] [code] [demo]
All boundaries are conventions, waiting to be transcended.
One may transcend any convention if only one can first conceive of doing so.
― David Mitchell, Cloud Atlas