How to calculate if something lies in someone's field of vision

Posted 2020-07-22 16:58

I have a subject that has a position in 2D space and a velocity, both represented by vectors. The subject has a field of vision of 135 degrees to each side. It looks in the same direction as it is moving (along the velocity vector).

I have objects with a position in a 2D space represented by a vector.

In the drawing the objects on blue background are visible, the objects on red background are not visible to the subject.

How can I calculate which objects lie in the subject's field of vision at a given moment?

Which objects lie in field of vision

2 answers

闹够了就滚 · 2020-07-22 17:24

You just need to find the distance to each object and the angle towards it. Then check that the distance is not greater than the radius of that light blue circle, and that the angle between the velocity vector and the vector to the object does not exceed 135 degrees.

The Euclidean distance is simply sqrt((x2 - x1)^2 + (y2 - y1)^2).

As for the angle, represent the vector (x2 - x1, y2 - y1) and the velocity (vx, vy) in the polar coordinate system, and then check that the absolute difference between their angles is no more than 135 degrees. Programming languages often provide a convenient function atan2(y, x) to find the polar angle of a vector.


The polar angle of a point is measured from the Ox axis in the counterclockwise direction. Suppose we have two vectors: the velocity (vx, vy) (left pic) and the vector from our subject at (x1, y1) to the object at (x2, y2), which is (x2 - x1, y2 - y1) (center pic). Let their polar angles be alpha and beta, respectively. Then the angle between (vx, vy) and (x2 - x1, y2 - y1) is the difference of these polar angles (right pic).

[Three figures: the polar angle alpha of the velocity, the polar angle beta of the vector to the object, and their difference]

There's a trick, however. The alpha and beta values you get are typically either from -PI to +PI, or from 0 to 2PI. So the difference beta - alpha will be between -2PI and +2PI, and you'll want it to be between -PI (180 degrees clockwise from the current direction) and +PI (180 degrees counterclockwise from the current direction). For that, a simple transformation suffices, such as this pseudocode:

if angle > +PI:
    angle := angle - 2PI
if angle < -PI:
    angle := angle + 2PI
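Putting the distance check, atan2, and the angle normalization together, a minimal Python sketch might look like this (the function name and parameters are illustrative, not from the original answer):

```python
import math

def in_fov(subject, velocity, obj, radius, half_angle_deg=135.0):
    """True if obj is within `radius` of subject and within
    `half_angle_deg` degrees of the direction of `velocity`."""
    dx, dy = obj[0] - subject[0], obj[1] - subject[1]
    if math.hypot(dx, dy) > radius:               # outside the light blue circle
        return False
    alpha = math.atan2(velocity[1], velocity[0])  # polar angle of the velocity
    beta = math.atan2(dy, dx)                     # polar angle toward the object
    angle = beta - alpha
    if angle > math.pi:                           # normalize into [-PI, +PI]
        angle -= 2 * math.pi
    if angle < -math.pi:
        angle += 2 * math.pi
    return abs(angle) <= math.radians(half_angle_deg)
```

For a subject at the origin moving along +x with radius 10, an object at (0, 5) is visible (90 degrees off-heading), while one at (-5, 0) is not (180 degrees, behind the subject).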
我想做一个坏孩纸 · 2020-07-22 17:38

A possible optimization, assuming that every object's FOV needs to be tested at the same time (i.e. every object also becomes a subject), is to use the Delaunay triangulation of the objects, treated as a graph, to drive a BFS-style search. There are two restrictions on this BFS.

  1. Objects that are too far away from the subject are not expanded.
  2. Objects that are not within the FOV are also not expanded. There is an exception for cases where the triangulation edge to the object intersects at least one of the two FOV boundary segments.

The example below shows why this exception matters; it also gives an unlikely worst case, using a narrow FOV whose distance limit is too large to have an effect. Without the exception, the search would never reach the lone object inside the FOV.

Narrow FOV worst case example

Quick edit: added numbers to the example image to make it clearer how the BFS would proceed. The object labeled 1 is the nearest object. It is expanded, which leads to the objects labeled 2. Those objects are expanded in turn, yielding the objects labeled 3, and so on. The thicker lines show which edges (directed from lower to higher label) are used during the expansion. Thus only the leftmost object is not expanded.

The performance of the algorithm, if the FOV needs to be tested for every object, is:

O(N log N + NT)

where N is the number of objects and T is the average number of objects tested per subject. Compare this to an ideal output-sensitive algorithm of O(N*V), where V is the average number of objects in the FOV; on average, O(T) should be O(V). The downside is that, since V depends on the FOV area, V is likely to be N/C for some constant factor C, making each query technically O(N), but that constant can be very small. One way to approximate C is (area of the convex hull of the objects) / (area of the FOV), the reasoning being that a given fraction of the convex hull's area is, on average, likely to contain a roughly equal fraction of the objects.

The pseudocode assumes that each object (including the subject) is an instance of a class, for simplicity called Object, that provides a few class methods and class members, summarized below:

Class Members:

  • pos: The Object's 2D position
  • minDistFOV: Minimum FOV distance. Most likely 0.0 but may be higher.
  • maxDistFOV: Maximum FOV distance. Unknown, but algorithm assumes it is a fixed distance.

Class Methods:

  • isWithinFOV(Object): Determines if the given Object is within this Object's FOV. Returns True if it is.

The following helper functions are also not implemented here, but standard implementations are easy to find online:

  • offset(point, angle, distance): Returns the 2D point resulting from offsetting a 2D point by a given angle and distance.
  • intersects(segment1, segment2): Determines if the two line segments intersect.
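As a reference, here is one way these two helpers might be coded in Python, matching the descriptions above: a polar offset, and the standard orientation-based closed-segment intersection test.

```python
import math

def offset(point, angle, distance):
    """Offset a 2D point by `distance` along direction `angle` (radians)."""
    return (point[0] + distance * math.cos(angle),
            point[1] + distance * math.sin(angle))

def intersects(seg1, seg2):
    """True if the closed segments seg1 and seg2 intersect."""
    def orient(a, b, c):
        # Sign of the cross product (b - a) x (c - a).
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)

    def on_seg(a, b, c):
        # For c collinear with a-b: is c within the bounding box of a-b?
        return (min(a[0], b[0]) <= c[0] <= max(a[0], b[0]) and
                min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))

    p1, p2 = seg1
    p3, p4 = seg2
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    if d1 != d2 and d3 != d4:       # proper crossing
        return True
    # Collinear/touching cases
    return ((d1 == 0 and on_seg(p3, p4, p1)) or
            (d2 == 0 and on_seg(p3, p4, p2)) or
            (d3 == 0 and on_seg(p1, p2, p3)) or
            (d4 == 0 and on_seg(p1, p2, p4)))
```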

The pseudocode looks something like below.

function lineSegmentFOV(subject, angle):
    # This function gets one of the line segments that bound the FOV
    # (one of the red lines in the image)
    segmentNearPos = offset(subject.pos, angle, subject.minDistFOV)
    segmentFarPos = offset(subject.pos, angle, subject.maxDistFOV)
    return (segmentNearPos, segmentFarPos)

function findObjectsInFOV(subject, objects, triangulation, kdTree):
    objectsInFOV = new Set() # A set seemed like the best overall choice here
    checkedObjects = new Set()

    # Get subject's edge of FOV line segments
    halfFOV = subject.FOV / 2
    lowerSegment = lineSegmentFOV(subject, subject.dir - halfFOV)
    higherSegment = lineSegmentFOV(subject, subject.dir + halfFOV)

    # Check nearest object to subject
    nearestNeighbor = kdTree.nearestNeighbor(subject)
    if not subject.equals(nearestNeighbor): # Subject cannot be in its own FOV
        if subject.isWithinFOV(nearestNeighbor):
            objectsInFOV.add(nearestNeighbor)

    # Begin the search for objects within the FOV
    objectsToExpand = new Queue(nearestNeighbor) # Always expand subject's nearest neighbor
    while objectsToExpand.length > 0:
        object = objectsToExpand.dequeue() # Get the next object to expand
        if object in checkedObjects: # Don't expand an object twice
            continue
        checkedObjects.add(object)
        # Find expandable objects and note those that are in the FOV as well.
        for adjacent in triangulation[object]:
            if adjacent in checkedObjects:
                continue
            edge = (object.pos, adjacent.pos)
            # Check if the adjacent object is in the FOV
            if subject.isWithinFOV(adjacent):
                objectsInFOV.add(adjacent)
                objectsToExpand.enqueue(adjacent)
            # Otherwise, expand it anyway if the object-adjacent edge
            # intersects one of the FOV boundary segments
            else if intersects(edge, lowerSegment) or intersects(edge, higherSegment):
                objectsToExpand.enqueue(adjacent)
    return objectsInFOV

function findObjectsInAllFOVs(objects):
    triangulation = new DelaunayTriangulation(objects)
    kdTree = new KDTree(objects)
    allObjectsInFOVs = new Dictionary()

    # Cycle through each object to find the other objects in its FOV
    for subject in objects:
        allObjectsInFOVs[subject] = findObjectsInFOV(subject, objects, triangulation, kdTree)
    return allObjectsInFOVs
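As a runnable sketch, the BFS above can be translated into Python. To keep it self-contained, this version assumes minDistFOV is 0 (so both FOV boundary segments start at the subject), takes the Delaunay triangulation as a precomputed adjacency dict mapping each object's position to its neighbors' positions, and takes the nearest neighbor from the caller (e.g. from a k-d tree query). All names here are illustrative.

```python
import math
from collections import deque

def _offset(p, angle, dist):
    # Move point p by dist along direction angle (radians).
    return (p[0] + dist * math.cos(angle), p[1] + dist * math.sin(angle))

def _orient(a, b, c):
    # Sign of the cross product (b - a) x (c - a).
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def _intersects(s1, s2):
    # Orientation-based closed-segment intersection test.
    (p1, p2), (p3, p4) = s1, s2
    d1, d2 = _orient(p3, p4, p1), _orient(p3, p4, p2)
    d3, d4 = _orient(p1, p2, p3), _orient(p1, p2, p4)
    if d1 != d2 and d3 != d4:
        return True
    on = lambda a, b, c: (min(a[0], b[0]) <= c[0] <= max(a[0], b[0]) and
                          min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))
    return ((d1 == 0 and on(p3, p4, p1)) or (d2 == 0 and on(p3, p4, p2)) or
            (d3 == 0 and on(p1, p2, p3)) or (d4 == 0 and on(p1, p2, p4)))

def _in_fov(subject, heading, half_fov, max_dist, obj):
    # Distance test plus normalized angle-difference test (first answer).
    dx, dy = obj[0] - subject[0], obj[1] - subject[1]
    if math.hypot(dx, dy) > max_dist:
        return False
    angle = math.atan2(dy, dx) - heading
    while angle > math.pi:
        angle -= 2 * math.pi
    while angle < -math.pi:
        angle += 2 * math.pi
    return abs(angle) <= half_fov

def find_objects_in_fov(subject, heading, fov, max_dist, adjacency, nearest):
    # BFS over the triangulation edges, starting from the subject's nearest
    # neighbor. Objects outside the FOV are still expanded when the edge
    # leading to them crosses an FOV boundary segment.
    half = fov / 2
    lower = (subject, _offset(subject, heading - half, max_dist))
    upper = (subject, _offset(subject, heading + half, max_dist))
    visible, checked = set(), set()
    if _in_fov(subject, heading, half, max_dist, nearest):
        visible.add(nearest)
    queue = deque([nearest])
    while queue:
        obj = queue.popleft()
        if obj in checked:          # don't expand an object twice
            continue
        checked.add(obj)
        for adj in adjacency[obj]:
            if adj in checked:
                continue
            if _in_fov(subject, heading, half, max_dist, adj):
                visible.add(adj)
                queue.append(adj)
            elif _intersects((obj, adj), lower) or _intersects((obj, adj), upper):
                queue.append(adj)
    return visible
```

For example, with a subject at the origin heading along +x with a 90-degree total FOV, three mutually adjacent objects at (1, 0), (2, 1), and (-2, 0) yield a visible set of {(1, 0), (2, 1)}: the object behind the subject is expanded (its edges cross the FOV boundaries) but never reported. In practice the adjacency dict could come from something like scipy.spatial.Delaunay and the nearest neighbor from scipy.spatial.KDTree.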