Control Systems For Projectile Defense


Ryan Mendivil

March 20, 2015

Abstract

In this paper, I will describe various methods for defending against airborne projectiles. This includes tracking mechanisms for following objects in three-dimensional space and predicting the paths they will take. In addition, methods of calculating interception trajectories and the factors involved will be discussed.

Contents

I Introduction

II Assumptions

III Model

1 Projectile Tracking
   1.1 Radar
   1.2 Camera

2 Calculating Trajectories
   2.1 Line and Curve Fitting
   2.2 Kalman Filtering

3 Intercepting
   3.1 Aim and Travel Time
   3.2 Solving For Trajectories

4 Verification

5 Discussion

6 References


Part I

Introduction

In our modern world, we live in relatively peaceful times. However, there are still many places with ongoing conflicts. Every year, hundreds of soldiers and civilians are injured or killed in combat. A good portion of these casualties come from the use of projectile-based weaponry. When used properly, projectiles can be very accurate and cause minimal collateral damage. Unfortunately, this is not the case in many modern war zones. Projectiles are launched with the intent to cause harm to anyone from the opposing side, be they civilians or soldiers. Producing systems that can properly prevent this kind of attack is of vital importance. There are already many systems on the market, but most are extremely expensive and proprietary. Producing open systems that can be used across a wide range of hardware would be beneficial to all. This paper will discuss a general model of control that can be applied to projectile tracking and defense.

Part II

Assumptions

• The projectile is within tracking distance of our system and is distinguishable from its surroundings.

• The system will have a clean line of sight to the enemy projectile when it fires.

• A hit will be considered successful when both the enemy projectile and our intercepting projectile occupy the same space.

• The time it takes the system to aim is constant or is set large enough to account for any possible amount of movement.

• The accuracy and sampling rate of the projectile tracker is high enough to provide reasonable data.

• The acceleration of the enemy projectile is constant.

• The velocity of our projectile is consistent and the time it will take to reach the point of interception is calculable.

• The force of gravity on both the enemy and intercepting projectiles is constant.

• Since the effects of wind and air resistance depend heavily on the type of projectile, we will be ignoring them for the sake of simplicity.

Part III

Model

In order to properly model and build a functional control system, we will need three different components. First, we need a method of gathering data about where the enemy projectile is located over time. The two systems discussed in this paper are radar and video cameras. Both have their own advantages and disadvantages and require different models in order to work. Second, we develop a model of the projectile's position, velocity and acceleration. This allows us to predict where it will be at some time in the future and determine the accuracy of that prediction. Finally, once we've collected and analyzed enough data to achieve high accuracy, we can launch our own projectile. This will need to take into account aim time and travel time to calculate an ideal interception point.


1 Projectile Tracking

The purpose of this step is to take raw environmental data and turn it into a series of points plotted in Cartesian space. These points represent the trajectory of the enemy projectile and are made up of a list of samples (x₀, y₀, z₀), ..., (xₙ, yₙ, zₙ), where n is the number of positions sampled, together with a sampling interval ∆t, the time between consecutive samples.
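Concretely, the output of this stage can be held in a very simple track structure. The sketch below is one possible representation in Python; the names are illustrative only and not part of any particular system.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Track:
    """Sampled positions of the enemy projectile in Cartesian space."""
    dt: float                                            # time between samples (s)
    points: List[Tuple[float, float, float]] = field(default_factory=list)

    def add(self, x: float, y: float, z: float) -> None:
        """Append the latest sampled position."""
        self.points.append((x, y, z))

    def times(self) -> List[float]:
        """Sample times t_0 .. t_n implied by the sampling interval."""
        return [i * self.dt for i in range(len(self.points))]
```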

1.1 Radar

For tracking objects at both long and short distances, radar is rather ubiquitous. It works by sending out radio waves from a transmitter. These waves reflect off objects farther out and return to be collected in a receiver. The location of these objects can then be calculated based on the strength of the returned signal and the angle of the detector. The equation for received signal power for radar is shown below. [5]

$$S = \frac{P G^2 \lambda^2 \sigma}{(4\pi)^3 R^4}$$

S = Received signal power (watts)

P = Transmitted signal power (watts)

G = Radar antenna gain

λ = Wavelength used (meters)

σ = Target cross section (meters²)

R = Distance to target (meters)

Rewriting for distance (R), we get:

$$R = \sqrt[4]{\frac{P G^2 \lambda^2 \sigma}{(4\pi)^3 S}}$$

In addition to the distance, we need to know the detector's orientation. The rotation of the detector around its base will be represented by θ ∈ [0, 2π]. Its vertical angle, measured from a line normal to the earth, will be represented by φ ∈ [0, π/2]. Therefore, the location of an object in spherical coordinates can be represented as the vector ⟨θ, φ, R⟩. Transforming to the Cartesian coordinate system gives ⟨R sin(φ)cos(θ), R sin(φ)sin(θ), R cos(φ)⟩. Overall, we can model the position of the projectile in ⟨x, y, z⟩ as:

$$\langle x, y, z \rangle = \sqrt[4]{\frac{P G^2 \lambda^2 \sigma}{(4\pi)^3 S}} \; \langle \sin(\phi)\cos(\theta),\; \sin(\phi)\sin(\theta),\; \cos(\phi) \rangle$$

The distance to an object will be measured every time the detector sends and receives a signal from it. A signal is sent in any given direction once per full rotation of the radar dish. Taking the rotational speed of the detector around its base to be ∆θ, we can calculate ∆t:

$$\Delta t = \frac{2\pi}{\Delta\theta}$$
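As a rough sketch of how the equations above could be applied, the Python snippet below converts a single radar return into a Cartesian sample; the function and parameter names are illustrative assumptions, not part of any specific radar system.

```python
import math

def radar_to_cartesian(P, G, lam, sigma, S, theta, phi):
    """Convert one radar return into an estimated Cartesian position.

    P: transmitted power (W), G: antenna gain, lam: wavelength (m),
    sigma: target cross section (m^2), S: received power (W),
    theta: rotation around the base (rad), phi: angle from vertical (rad).
    """
    # Radar range equation solved for distance R (fourth root).
    R = ((P * G**2 * lam**2 * sigma) / ((4 * math.pi)**3 * S)) ** 0.25
    # Spherical -> Cartesian conversion.
    return (R * math.sin(phi) * math.cos(theta),
            R * math.sin(phi) * math.sin(theta),
            R * math.cos(phi))

def sample_interval(omega):
    """Sampling interval: one measurement per full rotation at speed omega (rad/s)."""
    return 2 * math.pi / omega
```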


1.2 Camera

Similar to radar, cameras take in electromagnetic waves to perform detection. However, rather than measuring distance, they measure the spectrum of light received. This can provide a much more detailed picture of the target than radar. However, due to the shorter wavelengths involved, it does not have the range that radar does. It also provides only two of the three coordinates needed to locate an object in space. Measuring depth therefore requires two cameras at different locations, or the camera can be used in conjunction with a distance-tracking system such as radar.

In order to map the image from the camera to its location in the real world, we need to know the following.

f = Camera Focal Length (meters)

R = Distance to Target (meters)

FPS = Frames Per Second (1/second)

Xc = Horizontal distance from center (meters)

Yc = Vertical distance from center (meters)

If we assume that the camera is attempting to stay "locked on" to the target and minimize Xc and Yc, then we can approximate the location of points as follows:

$$\langle x, y, z \rangle = \frac{R}{f} \langle X_c, Y_c, f \rangle$$

This scales the image coordinates by a factor of R/f, mapping them to their real-world size at the distance of the tracked projectile while still maintaining the same proportions.

The time difference is simply one over the frame rate.

$$\Delta t = \frac{1}{FPS}$$
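A minimal sketch of this mapping in Python, assuming Xc and Yc are already expressed in sensor-plane meters and the distance R comes from an external source such as the radar; all names are illustrative.

```python
def camera_to_cartesian(Xc, Yc, f, R):
    """Map image-plane offsets to an approximate real-world position.

    Xc, Yc: horizontal/vertical offsets from the image center (m, on the sensor)
    f:      focal length (m)
    R:      distance to the target (m), e.g. from the radar
    """
    scale = R / f              # the R/f scaling factor described above
    return scale * Xc, scale * Yc, scale * f

# Sampling interval from the frame rate, e.g. for a 60 FPS camera.
dt = 1.0 / 60.0
```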

2 Calculating Trajectories

Once a projectile has been detected, it needs to be tracked to determine its trajectory. Assuming we have its locations over time, there are a few different methods of predicting where it will go.

2.1 Line and Curve Fitting

The tracked projectile is most likely accelerating due to gravity or other forces. Therefore, we will need a second-degree polynomial to model it.

$$x = at^2 + bt + c$$

But we're working in three dimensions, so each dimension needs to be given its own equation.

$$\langle a_x t^2 + b_x t + c_x,\; a_y t^2 + b_y t + c_y,\; a_z t^2 + b_z t + c_z \rangle$$

This isn't any more complicated than fitting a single curve; it just needs to be done three times. This can be done with MATLAB's polyfit function and is fairly efficient. However, it's not ideal when we don't have a lot of data to work with.
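An equivalent sketch in Python/NumPy (rather than MATLAB), assuming the samples come from a track like the one in section 1; the function names are illustrative.

```python
import numpy as np

def fit_trajectory(times, points):
    """Fit a second-degree polynomial to each coordinate of the track.

    times:  sample times t_0 .. t_n
    points: array of shape (n+1, 3) holding the sampled (x, y, z) positions
    Returns one coefficient triple [a, b, c] per axis.
    """
    points = np.asarray(points, dtype=float)
    # np.polyfit returns coefficients highest-degree first: [a, b, c].
    return [np.polyfit(times, points[:, axis], deg=2) for axis in range(3)]

def predict(coeffs, t):
    """Evaluate the fitted trajectory at time t, returning (x, y, z)."""
    return tuple(np.polyval(c, t) for c in coeffs)
```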


2.2 Kalman Filtering

One of the most popular software libraries for computer vision is OpenCV [3]. It contains functions for tracking moving objects over time and predicting their locations. One of the features it contains is an implementation of Kalman filtering.

$$x_k = F_k x_{k-1} + B_k u_k + w_k$$ [1]

This allows you to take incoming data and process it to perform object tracking. The advantage of this over fitting a curve is that it's a continuous process. Rather than taking a list of points and fitting them, it predicts a new state xₖ based on the existing state xₖ₋₁, a model Fₖ, a control-input model Bₖ, a control vector uₖ and process noise wₖ. These are then used to output predictions of how the system will behave in the future. Unfortunately, due to time constraints, I was unable to upgrade the curve fitting or tracking to use Kalman filtering. However, this is an interesting area of ongoing research in the military [4] that would be a good project for the future.
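For reference, here is a minimal sketch of how OpenCV's KalmanFilter could be configured for this problem, assuming a constant-acceleration model per axis and hand-picked noise covariances; the helper name and tuning values are illustrative, not part of the system described above.

```python
import numpy as np
import cv2

def make_tracker(dt):
    """Constant-acceleration Kalman filter for a 3D track.

    State vector (9): position, velocity and acceleration for each axis.
    Measurement (3):  the sampled (x, y, z) position.
    """
    kf = cv2.KalmanFilter(9, 3)
    # Per-axis transition block for [position, velocity, acceleration].
    F = np.array([[1, dt, 0.5 * dt**2],
                  [0, 1,  dt],
                  [0, 0,  1]], dtype=np.float32)
    kf.transitionMatrix = np.float32(np.kron(np.eye(3), F))
    # Only positions are measured (state indices 0, 3 and 6).
    H = np.zeros((3, 9), dtype=np.float32)
    H[0, 0] = H[1, 3] = H[2, 6] = 1.0
    kf.measurementMatrix = H
    kf.processNoiseCov = np.float32(np.eye(9)) * 1e-3
    kf.measurementNoiseCov = np.float32(np.eye(3)) * 1e-1
    return kf

# Usage: correct with each new sample, then predict the next state.
# kf = make_tracker(dt=0.05)
# kf.correct(np.float32([[x], [y], [z]]))
# predicted_state = kf.predict()
```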

3 Intercepting

3.1 Aim and Travel Time

Our system is going to need time to turn to the right trajectory, arm its projectile, fire it, and then allow time for it to reach the target point. We aren't just calculating a location for interception; we're calculating a point in the future.

Delay = AimTime + FireTime + TravelTime

AimTime and FireTime are constant, but TravelTime depends on the trajectory and velocity. We can estimate the travel time by taking an arc-length integral, where P(t) is the trajectory and V is the velocity.

$$\text{TravelTime} = \frac{1}{V}\int_0^{t_i} |P'(t)|\, dt$$
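A numerical version of this integral, sketched in Python for a quadratic trajectory P(t); the parameterization and names are illustrative assumptions.

```python
import numpy as np

def travel_time(coeffs, t_i, v, samples=1000):
    """Estimate TravelTime = (1/v) * integral from 0 to t_i of |P'(t)| dt.

    coeffs: per-axis quadratic coefficients [a, b, c] describing P(t)
    t_i:    upper limit of the integral (interception time)
    v:      speed of our interceptor, assumed constant
    """
    t = np.linspace(0.0, t_i, samples)
    # Derivative of a*t^2 + b*t + c is 2*a*t + b on each axis.
    d = np.stack([2 * a * t + b for a, b, _c in coeffs])
    speed = np.linalg.norm(d, axis=0)                    # |P'(t)| at each sample
    # Trapezoidal rule for the arc length of the path.
    arc_length = np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t))
    return arc_length / v
```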

3.2 Solving For Trajectories

Unfortunately, we're caught in a bit of a chicken-and-egg problem here. Travel time depends on velocity, which depends on our trajectory, which depends on travel time. Also, we need to consider the fact that most launchers take in angles and a velocity in order to fire, rather than a "target" at some point in the distance.


Dealing with this problem solely in Cartesian coordinates will make things more difficult when we're sending data to our defense system. Therefore, we need to transform our model into spherical coordinates. Using the control variables θ, φ and v, we can calculate P(t):

$$P(t) = \left\langle\, t v \sin(\phi)\cos(\theta),\; t v \sin(\phi)\sin(\theta),\; t v \cos(\phi) - \tfrac{1}{2} g t^2 \,\right\rangle$$

We can take advantage of some features of spherical coordinates and solve for θ:

$$\theta = \tan^{-1}\!\left(\frac{y_E}{x_E}\right)$$

And solve for φ in terms of v:

$$\phi = \cos^{-1}\!\left(\frac{z_E + \tfrac{1}{2} g t^2}{t v}\right)$$

with v constrained by:

$$v_E \cos(\phi_E) \le v \le v_{max}$$

When we solve these equations, we have to take into account the constraint t_i > Delay. Normally, this integral of path length comes out very ugly and difficult to work with. However, if we model our own projectile's trajectory so that it takes a direct path (v ≫ g), then we can ignore the force of gravity and reduce the complexity drastically.

$$\text{TravelTime} = \frac{1}{v}\int_0^{t_i} |P'(t)|\, dt \approx \frac{\sqrt{x_E^2 + y_E^2 + z_E^2}}{v}$$

where ⟨x_E, y_E, z_E⟩ is the predicted position of the enemy projectile at the interception time.
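Putting the pieces together, the circular dependency can be resolved with a simple fixed-point iteration under the direct-path approximation. The sketch below assumes a predict(t) function like the one from the curve-fitting step; all names and tolerances are illustrative.

```python
import math

def solve_firing_solution(predict, v, aim_time, fire_time, tol=1e-4, max_iter=50):
    """Iterate until the interception time and travel time are consistent.

    predict(t): predicted enemy position (x, y, z) at time t
    v:          interceptor speed (direct path, gravity ignored)
    Returns (theta, phi, t_i): firing angles and interception time.
    """
    t_i = aim_time + fire_time             # initial guess: no travel time
    for _ in range(max_iter):
        x_e, y_e, z_e = predict(t_i)       # where the enemy will be at t_i
        distance = math.sqrt(x_e**2 + y_e**2 + z_e**2)
        new_t_i = aim_time + fire_time + distance / v
        if abs(new_t_i - t_i) < tol:       # converged on a consistent t_i
            t_i = new_t_i
            break
        t_i = new_t_i
    x_e, y_e, z_e = predict(t_i)
    r = math.sqrt(x_e**2 + y_e**2 + z_e**2)
    theta = math.atan2(y_e, x_e)           # rotation around the base
    phi = math.acos(z_e / r)               # angle from vertical
    return theta, phi, t_i
```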
