Daqing Yi

Machine-Dominant Human Intelligence

31 Dec 2015

I took a road trip to Northern California over Christmas. We used the Google Maps app for planning and navigation from one location to another. The drives were long and tedious. Almost all of the turns and lane changes followed the app's instructions. I felt as if my body were in the loop of an autonomous driving system. Such a system consists of

  • a high-level planner (the Google Maps app),
  • a low-level actuator (the car), and
  • a middle-level controller (myself).

This shapes a machine-dominant human intelligence, in which a human's decision making is determined by a machine's planning. The structure of this human-machine system led to many annoying experiences during the trip.
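To make the structure concrete, here is a minimal sketch of that three-level loop, with the human reduced to a pass-through controller. All names and the toy instruction set are hypothetical illustrations, not any real navigation API.

```python
ROUTE = ["turn left", "merge right", "turn right", "arrive"]

def planner():
    """High-level planner (the navigation app): emits turn-by-turn instructions."""
    for instruction in ROUTE:
        yield instruction

def human_controller(instruction):
    """Middle-level controller (the driver): translates each instruction
    into a low-level action without questioning the plan."""
    actions = {"turn left": "steer(-30)", "merge right": "steer(+10)",
               "turn right": "steer(+30)", "arrive": "brake()"}
    return actions[instruction]

def car_actuator(action):
    """Low-level actuator (the car): executes the action."""
    print("executing", action)

# The machine decides; the human merely translates; the car executes.
for instruction in planner():
    car_actuator(human_controller(instruction))
```

The point of the sketch is that nothing in the loop ever questions the planner, which is exactly what made the experiences below so annoying.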

Conflicts arise from the lack of behavior diversity. It is known that

all roads lead to Rome.

But the Google Maps app only provides the optimal routes to “Rome”. When all the drivers heading to Rome take exactly the same route, a traffic jam forms. Some drivers might be flexible enough to make a different turn onto a different route, but in an unfamiliar city most tend to follow the instructions. On the way to Lombard Street, I could tell many cars were going to the same destination, because we made the same turns at every intersection. Together, we were all trapped in a jam on a street two blocks away from the destination. Though I could see there might be an alternative route on the map, I did not dare to try it. The predicted delay was very inaccurate in the traffic jam, which made me doubt whether the traffic on a different route would be any better.

Obviously, the driving happened in a multi-agent environment. Conflicts occur when all the agents share the same behavior in the same workspace. As a result, the individually optimal route becomes non-optimal once the impact of the other agents is taken into account. This can be solved by introducing diversity into the agents' behaviors. The Google Maps app has not yet incorporated any such scheduling into its planning procedure. In a centralized approach, the planning on the server side could consider the routes of all the relevant vehicles. In a distributed approach, each vehicle's planning could model the behaviors of the other vehicles with game theory.
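As a minimal sketch of the centralized idea, imagine a server that knows a few candidate routes and uses a toy congestion model in which travel time grows with the number of vehicles already assigned to a route; the greedy assignment below spreads vehicles across routes instead of sending everyone down the single optimal one. All route names and numbers are hypothetical.

```python
# Free-flow travel time (in minutes) of three candidate routes to one destination.
free_flow = {"route_a": 10.0, "route_b": 12.0, "route_c": 15.0}
loads = {route: 0 for route in free_flow}  # vehicles assigned so far

def travel_time(route, load):
    """Toy congestion model: each assigned vehicle adds 5% of free-flow time."""
    return free_flow[route] * (1.0 + 0.05 * load)

def assign_next_vehicle():
    """Send the next vehicle to the route with the lowest congested travel
    time; assignments diversify once the best route becomes congested."""
    best = min(loads, key=lambda route: travel_time(route, loads[route]))
    loads[best] += 1
    return best

for vehicle in range(10):
    print("vehicle", vehicle, "->", assign_next_vehicle())
# Rather than all ten vehicles piling onto route_a, later vehicles are
# diverted to route_b once route_a's congested travel time exceeds it.
```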

Incorrect information spreads. We had “Big Sur” as one stop on the Highway 1 drive. Before the drive, we thought it was a single spot. Google Maps did indicate a location to go to, and a route was planned. We were thus pulled off the highway and onto a muddy road in order to reach the “Big Sur” point on the map. Not surprisingly, we encountered several other cars following the same foolish route. I even heard someone say that this way led to “Big Sur”. We were all misled by the incorrect information in the map. As more and more drivers use Google Maps, incorrect information can spread quickly. I had similar experiences with one-way streets and the like.

Some incorrect information arises because a machine cannot understand a user correctly. If Google Maps had known that I wanted to visit a region rather than a human-labeled point with the same name, a different route would have been obtained. Planning and decision making that are dominated by machine intelligence could therefore be avoided by adding a human level above the planner level. Introducing this human level would increase the fault tolerance of the system.
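A minimal sketch of that extra human level, assuming a hypothetical plan_route() that can mis-resolve an ambiguous query: the human confirms the machine's interpretation of the goal before the plan is executed, so a wrong pin is caught before the muddy road.

```python
def plan_route(query):
    """Hypothetical planner that may resolve an ambiguous query to the
    wrong point, like the mislabeled "Big Sur" pin on a muddy road."""
    return {"goal": "point labeled '%s'" % query,
            "route": ["exit highway", "muddy road"]}

def human_review(plan):
    """Human level above the planner: verify the machine's interpretation
    of the goal before committing to the plan."""
    answer = input("Planner wants to drive to %s. Accept? [y/n] " % plan["goal"])
    return answer.strip().lower() == "y"

plan = plan_route("Big Sur")
if human_review(plan):
    print("following route:", plan["route"])
else:
    print("plan rejected; ask the planner for the region, not the point")
```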

There are going to be more and more applications in which humans' decision making relies on machines' decisions, or is even dominated by them. But I believe the ultimate goal of human-machine interaction is human-dominant machine intelligence.