Exploring XPeng’s Self-Driving Tech Live: 1 Hour+, No Interventions!
I recently went on an exclusive test drive of XPeng’s “City Navigation Guided Pilot” (CNGP) in the city of Guangzhou, China — virtually. I rode in the car via what was basically a Zoom call (Chinese version of Zoom) alongside an XPeng engineer, PR people, and a “driver.” They had 4 cameras set up to show me different angles, views, or screens so that I could get as close to a real-world sense of the drive as possible. It was no “5D experience” for me, but it was actually closer to riding in the car than I thought it would be. I sensed my body reacting to certain portions of the drive more than I expected.
Overall, the takeaway is this: I expected to be impressed with CNGP because of video footage I’d examined previously, but it clearly exceeded my expectations, and I might even say it blew me away. We drove through heavy and sometimes chaotic city traffic into the center of Guangzhou and back out for over an hour (~1 hour and 7 minutes), and the driver didn’t have to disengage CNGP once! Furthermore, there was no instance where it seemed he’d have to. The drive appeared extremely smooth throughout — much smoother than I expected based on my experiences with ADAS (advanced driver assistance systems) from other automakers. There was actually nothing that I can firmly say needed improvement.
Of course, we have to acknowledge that being on a virtual drive is not the same as being a driver or passenger in the real world. Perhaps some segments of the drive would have seemed more difficult or jarring than they did virtually. Still, if you watch the whole video yourself, I think it’s clear that the system drives very smoothly, cautiously, and intelligently. There are several difficult scenarios in which the car handles the situation as well as I’d want from any driver, human or robot.
There are a lot of interesting points made in the comments under the video. I’ll come back to those at the end of this article. First, I want to highlight various notable segments of the drive.
At 2:45, the car makes a U-turn. In the process, a couple of motorbikes go in front of our car, one from the opposite direction, and the XPeng CNGP system seems to respond ideally to those challenges, eventually making the U-turn in a safe way.
Just after 5:15 in the video, a minivan cuts right in front of us. I think many driver-assist systems would hit the brakes a bit hard there, which is not pleasant for passengers, but the XPeng system seemed to do a great job of identifying the risk, avoiding it by slowing down, but not overreacting and hitting the brakes too hard. I like it.
At 5:43, a road janitor appears just inside our lane next to a concrete wall. Again, I think many systems (perhaps including my own Tesla FSD system) would react a bit harshly in that scenario, but the XPeng system doesn’t overreact, slowing down a bit and then going around the man safely while cars drive faster in the lane on our right. The challenge is superbly handled.
At about 7:21, we are driving at 36 km/h (22 mph) when a bus pulls out in front of us. Yet again, the XPeng CNGP system smoothly faces the problem, brakes slowly rather than harshly, doesn’t beep at us or make us take over, and then proceeds calmly but firmly like a human driver would.
(Frankly, I note at this point that, personally, I would not even feel comfortable testing Tesla FSD in this type of environment.)
At 10:53, a car on our right starts turning toward us, toward our lane. Our car notices, but rather than act crazy and scare everyone in the car, it just slows down gradually and leaves enough space that the car on the right can eventually turn into our lane in front of us. That then happens similarly with a second car that wants to turn into our lane.
If you go to 24:30 in the video, you can see the XPeng needs to change lanes, has a fairly short window to do so, and is surrounded by a lot of traffic. Nonetheless, the self-driving car implements the lane change perfectly, probably better than I would have.
It was just after that that I asked about the voice assistant, which announced what the car was about to do and warned the driver about things from time to time. It struck me as a very useful safety feature, helping to make sure the driver keeps paying attention. It can also explain to the driver what is happening — and why — in scenarios where that person doesn’t immediately notice that the car is turning, changing lanes, or steering around something. To me, this helps make the driver more comfortable and more likely to trust the system and leave it in operation. It’s a feature I think would be great to have as an option on Tesla FSD, and as I point out in the video, it would help me be more patient and avoid disengaging, which would help me better explore the limits of the FSD Beta system.
At 33:30, the car is on a curving roadway where a bunch of cars merge in front of it from both sides, and it handles that challenge superbly, seemingly as smoothly as it could.
At about 43:50, a car decides to merge into the XPeng car’s lane right in front of us. Whereas a less polished self-driving car might slam on the brakes too quickly there and jar the passengers, the XPeng responded in a smooth fashion and gave no real indication that it was being driven by a computer rather than a human.
At 48:10, the car needs to merge into traffic on a pretty busy road, and it again does so smoothly and seamlessly.
A little after 55:30, they mention that a future version of CNGP will be able to drive the car through parking garages, not just on public roads.
At 1:00:15, the car has a concrete wall on the left side right beyond the left white lane marking. A fisherman with what looks like a stroller appears on the side of the road there, partially in the driving lane. The XPeng CNGP system smoothly goes around the person, even inching into the lane on the right a little bit to leave enough space next to the human. Other cars are driving in that lane on the right and passing our car, making for quite a tricky scenario, but the XPeng self-driving system handles it brilliantly: it doesn’t slow too much, jerk the car, or move into the path of the cars speeding up from behind on the right. The maneuver and speed decisions are superbly executed.
It is these kinds of unexpected, odd scenarios that form the “long-tail edge cases” that self-driving software needs to learn to navigate. That’s what makes driving so hard sometimes, including for computers. The good news is that Guangzhou has plenty of odd edge cases to learn from, and the system gets more and more natural at avoiding people and obstacles as a result.
Several times during the drive, they mention their strong focus on not just getting the car to follow the rules and drive correctly but to also drive more and more like a human in a smooth and predictable way.
Comments from the Crowd
In the comments under the video on YouTube, “Treelon” writes, “Thank you for the peek at what china is up to but obviously mapped + lidar and not that impressive result with those handy caps, main part should be vision and back up with lidar if you really want but true way is end to end vision only approach to catch ’em all.” I used to subscribe to the same thought, especially contemplating a generalized approach that would work much more quickly everywhere. However, my experience with vision-only self-driving has led me to the belief that that is not going to be adequate. For now, at least, it seems that this lidar + vision + radar approach leads to much smoother, more trustworthy, and more enjoyable “computer self driving.” But we will see if I change my mind in 6 months or so.
There are several comments about the system relying on pre-mapping of the area. It’s a fair critique or note for sure, but at the end of the day, I like a system that works smoothly and very effectively. It seems to me that XPeng’s system is as good as it gets for this level of advanced driver assistance.
Ian Davies writes, “Mapping means it’s not a general solution, nothing like Tesla. Plus 4G or whatever. It can be a solution for CBD / inner city — swamp one city at a time.” Indeed. At the end of the day, though, if they are able to tackle a few dozen large cities in this way, that covers a large number of people. I think that approach can scale well and appropriately tackle one market after another. We’ll see. It’s all about being cost-competitive with a compelling product, and we’ll all have to wait to see how different approaches to true full self driving pan out on a large scale.
Let us know what you think down in the comments.
This post has been syndicated from a third-party source. View the original article here.