# Threat assessment model in air defense systems using Artificial Neural Networks

Salih Taşdemir<sup>1*</sup>, Murat Atan<sup>2</sup>

$^{1}$ Department of Econometrics, Hacı Bayram Veli University, Türkiye
$^{2}$ Department of Econometrics, Hacı Bayram Veli University, Türkiye

*Corresponding author E-mail: salihtasdemir35@hotmail.com

Received: Dec. 25, 2025. Revised: Jan. 11, 2026. Accepted: Jan. 13, 2026. Online: Jan. 13, 2026

# Abstract

This study aims to automate threat assessment and target assignment in air defense systems using a dynamic, learning, artificial intelligence-based model. Unlike threat assessment studies in the literature that use different criteria and methods, this study integrates missing-data completion, multi-criteria analysis, and artificial neural networks to update the threat score dynamically. The number of criteria has also been increased to give the model a broader perspective: most existing studies are static and use a small number of criteria, whereas this study presents a dynamic, multi-criteria model that can handle incomplete data. The developed Combined Geometric Threat Score offers an averaged perspective on threat assessment, which otherwise varies with individuals and geographical conditions. The model generates threat scores from criterion data obtained from radars and sensors and can respond adaptively to changing conditions. The results demonstrated high performance, with mean square errors (MSE) of 0.0005–0.0072 and a correlation coefficient (R) above 0.95. This approach accelerates decision support in air defense systems, reducing human influence and increasing system effectiveness.

© The Author 2026. Published by ARDA.

Keywords: Threat assessment; Artificial neural networks; Air defense systems; Multi-criteria decision making; Missing data completion.

# 1.
Introduction

Threat assessment and weapon assignment in air defense systems are among the most critical processes for decision-makers in a combat environment [1]. These decisions are often made within limited time frames and require considering multiple variables simultaneously. Therefore, artificial intelligence-based approaches are gaining importance to reduce human-related errors and create faster decision-making mechanisms. Traditional methods often struggle to adapt dynamically to evolving battlefield conditions and new threats [2].

This paper proposes a novel, dynamic threat assessment model that uses an Artificial Neural Network (ANN) to prioritize air targets. The model was trained on a comprehensive dataset synthesized from the literature and expert opinions, covering 26 different threat criteria. A key innovation is the "Combined Geometric Threat Score," which aligns threat values obtained from the literature with a weighted score based on criterion importance, forming a solid foundation for ANN training. The experimental results demonstrate the model's high performance, with regression R-values around 0.95 across various data splits and a low mean square error (MSE), highlighting its accuracy in predicting threat levels. It has been concluded that the AI-focused approach can significantly increase decision-making speed and accuracy, reduce human error, and provide a scalable framework for automatic threat prioritization in network-centric air defense systems.

This study considers more criteria in order to fill the gap in the literature. A comprehensive dataset consisting of 26 criteria has been used. It provides an average perspective on threat assessments that may vary with geographical conditions or individuals, identifies the relationships between the criteria for the first time, performs missing-data completion, and offers a more verifiable training set in a dynamic structure.

# 2.
Literature Review

It has been stated that the threat assessment concept and the weapon assignment model should be implemented and evaluated using sensor information from systems on an interconnected network [3]. A review of the literature reveals that different numbers of criteria are used and different threat assessment methods are employed. Studies in the literature have grouped threat assessment methods under four main headings, listed below. It has been stated that the combined use of these methods and the provision of visualization will improve performance [4].

Rule-Based Fuzzy Logic: Also defined as gray relational analysis (GRA), it analyzes the relationship between threats using gray system theory. As the number of criteria increases, so does the number of rules; since too many rules are required, evaluation is usually performed with smaller sets of criteria. Expert opinions are required for accuracy. The results obtained here are important for use in the testing and training of artificial neural networks.

Bayesian Networks and Stochastic Methods: These evaluate threats using probabilistic inference. They are useful for uncertain data but require a large dataset for training. It is very difficult to determine the strike effectiveness of systems clearly: a sufficient number of strikes and successful outcomes must be observed, and generating such data is very difficult due to confidentiality concerns and the lack of sufficient samples.

Multi-Criteria Decision-Making Methods: When criterion weights are unknown in the literature, ranking-superiority methods such as the Borda Method, Condorcet Method, and Basic Lexicographic Method are available. Methods for determining the criterion weights necessary for calculation also play an important role.
The most preferred methods for determining criterion weights are the Simple Cardinal Method, the Analytic Hierarchy Process (AHP), the CRITIC Method, and the Entropy Method. Once the weights have been determined, it is also important to compare them; Kullback-Leibler Divergence and Mean Absolute Error methods are used for this comparison. With the weights in hand, TOPSIS, ELECTRE, PROMETHEE, Permutation, DEMATEL, and MAUT methods are frequently preferred for ranking or selection. Most of these methods depend on the accuracy of expert opinions. The results determined by multi-criteria decision-making methods can be used as training data for artificial neural networks.

Artificial Neural Networks: These models consist of an architecture section (input, output, and hidden layers), a training section (dataset, preprocessing, optimization, and validation stages), and a mathematical formulation. Criteria are used as input parameters, an activation function is applied in the hidden layers, and the desired result is specified in the output layer. A sufficient number of known output results is required at the outset, so it is important that the dataset consists of calculated, validated results. Normalization and missing-data completion are performed in the preprocessing stage. In the optimization stage, the training, validation, and testing ratios are determined, and the learning rate, number of hidden layers, and number of hidden-layer neurons are optimized. The number of iterations or the stopping success rate is set with the learning function. Once ratios and parameters yielding good MAPE and R-squared values are found, the artificial intelligence model is created.

Studies in the literature, grouped by the number of criteria and the method used, are shown in Table 1.

Table 1.
Studies in the literature according to the number of criteria and method used <table><tr><td>Number of criteria</td><td>Rule-based fuzzy logic</td><td>Bayesian networks and stochastic methods:</td><td>Multi-criteria decision-making methods</td><td>Artificial Neural Networks</td></tr><tr><td>1</td><td></td><td></td><td>Reference [5]</td><td></td></tr><tr><td>2</td><td></td><td></td><td></td><td>Reference [6]</td></tr><tr><td>3</td><td>Reference [7]</td><td>Reference [8]</td><td>Reference [9]</td><td>Reference [10]</td></tr><tr><td></td><td>Reference [10]</td><td></td><td></td><td>Reference [11]</td></tr><tr><td></td><td>Reference [12]</td><td></td><td></td><td></td></tr><tr><td></td><td>Reference [13]</td><td></td><td></td><td></td></tr><tr><td>4</td><td>Reference [14]</td><td>Reference [15]</td><td>Reference [16]</td><td>Reference [17]</td></tr><tr><td></td><td>Reference [18]</td><td></td><td>Reference [19]</td><td>Reference [20]</td></tr><tr><td></td><td></td><td></td><td></td><td>Reference [21]</td></tr><tr><td>5</td><td>Reference [22]</td><td></td><td>Reference [23]</td><td>Reference [24]</td></tr><tr><td></td><td></td><td></td><td>Reference [25]</td><td></td></tr><tr><td>6</td><td>Reference [26]</td><td>Reference [27]</td><td>Reference [28]</td><td></td></tr><tr><td></td><td>Reference [28]</td><td></td><td>Reference [29]</td><td></td></tr><tr><td></td><td>Reference [30]</td><td></td><td>Reference [32]</td><td></td></tr><tr><td></td><td>Reference [31]</td><td></td><td></td><td></td></tr><tr><td></td><td>Reference [33]</td><td></td><td></td><td></td></tr><tr><td></td><td>Reference [34]</td><td></td><td></td><td></td></tr><tr><td></td><td>Reference [35]</td><td></td><td></td><td></td></tr><tr><td>7</td><td>Reference [36]</td><td>Reference [37]</td><td>Reference [38]</td><td>Reference [39]</td></tr><tr><td></td><td></td><td></td><td></td><td>Reference [40]</td></tr><tr><td></td><td></td><td></td><td></td><td>Reference [38]</td></tr><tr><td>8</td><td>Reference 
[41]</td><td>Reference [42]</td><td>Reference [43]</td><td>Reference [44]</td></tr><tr><td></td><td>Reference [43]</td><td></td><td></td><td></td></tr><tr><td>9</td><td></td><td>Reference [45]</td><td>Reference [46]</td><td></td></tr><tr><td></td><td></td><td></td><td>Reference [48]</td><td></td></tr><tr><td>10</td><td>Reference [47]</td><td></td><td></td><td></td></tr><tr><td>11</td><td>Reference [49]</td><td>Reference [50]</td><td></td><td></td></tr><tr><td>12</td><td>Reference [51]</td><td></td><td></td><td></td></tr><tr><td>13</td><td>Reference [52]</td><td></td><td></td><td></td></tr><tr><td></td><td>Reference [53]</td><td></td><td></td><td></td></tr><tr><td>16</td><td></td><td></td><td>Reference [54]</td><td>Reference [55]</td></tr><tr><td></td><td></td><td></td><td>Reference [56]</td><td></td></tr><tr><td>17</td><td></td><td>Reference [57]</td><td></td><td></td></tr><tr><td>18</td><td>Reference [58]</td><td></td><td>Reference [59]</td><td></td></tr><tr><td></td><td>Reference [60]</td><td></td><td></td><td></td></tr><tr><td>22</td><td></td><td></td><td>Reference [61]</td><td></td></tr><tr><td>55</td><td></td><td></td><td></td><td>Reference [62]</td></tr></table> Threat perception and reaction time may vary from person to person [1]. It has been noted that threat prioritization may differ depending on the individual or geographical conditions, and that reaction time may vary [63]. This is due to factors such as the capacity, experience, knowledge base, length of hierarchical approval time, and delegation of authority of the individuals performing this task. It has been stated that a decision support system is needed to reduce the initial uncertainty in threat assessment and that priorities must be correctly identified for this purpose [64]. It has been stated that threat priority zones and threat priorities must be determined [65]. 
It has been stated that communication on the command and control network, the collection of air pictures and sensor information, weapon assignment compatibility, training, and simulation must be coordinated to provide weapon engagement control support [66]. It has been stated that in a combat environment where situational awareness is very difficult, threats can be quickly neutralized using artificial neural networks, fuzzy logic, and genetic algorithms [67]. It has been stated that the information necessary for threat assessment must be generated and collected on a network basis [68]. Threat assessment requires the identification of proximity, capability, and intent [69]. It has been stated that the rapid development of technology has led to new threats entering the war environment, that traditional methods relying on human capacity will be insufficient, and that it is important to establish a network-centric, artificial intelligence-supported decision support system [70]. In addition, there are studies in different fields conducted with image-based artificial neural networks [71]. It is also possible to establish a diagnostic model through the automatic evaluation of radar image traces in threat assessment.

Countries' perspectives on threats vary due to their geographical locations. While the US and China have global defense strategies based on big data and artificial intelligence, Türkiye and South Africa focus on operational speed and human-centered solutions, converting uncertain threat data into mathematical models using mobile systems and fuzzy logic. Germany and Sweden prepare for crisis situations with scenario-based training. Sweden prepares for the worst-case scenario by analyzing the maneuverability of the threat, Germany focuses on modular systems to minimize potential damage from threats, and South Africa focuses on data visualization to facilitate decision-making under stress.
China aims to control the distributed structure with cloud-based systems and develops countermeasures against cyberattacks [2]. Some countries' perspectives on threats are summarized in Table 2.

Table 2. Perspectives on threats

<table><tr><td>Country</td><td>Main approach</td><td>Priority capabilities</td><td>Technological focus</td></tr><tr><td>USA</td><td>Multi-layered defense and proactive deterrence.</td><td>Speed, range, stealth technology, cyber integration.</td><td>THAAD, Patriot systems, artificial intelligence.</td></tr><tr><td>Russia</td><td>Hybrid warfare strategies and hypersonic missiles.</td><td>Hypersonic speed, electronic warfare, psychological impact.</td><td>S-400, S-500, Kinzhal hypersonic missile.</td></tr><tr><td>Israel</td><td>Rapid response and high-accuracy defense.</td><td>Missile defense (Iron Dome), friend-or-foe identification, real-time data processing.</td><td>Iron Dome, Arrow missile system, artificial neural networks.</td></tr><tr><td>China</td><td>Asymmetric capabilities and space-based surveillance.</td><td>Long-range, anti-satellite weapons, unmanned aerial vehicles.</td><td>HQ-9, DF-21D, quantum radar.</td></tr><tr><td>Türkiye</td><td>Domestic defense industry and multi-purpose defense networks.</td><td>Air defense missile systems (HISAR-SIPER), UAV technology, logistical flexibility.</td><td>HISAR-SIPER missile system, TB2, Anka-3, Akinci, Aksungur, Kızılelma.</td></tr><tr><td>France</td><td>NATO integration and nuclear deterrence.</td><td>Nuclear-tipped missiles, air superiority aircraft.</td><td>Rafale aircraft, Aster missile system.</td></tr><tr><td>North Korea</td><td>Weapons of mass destruction and psychological pressure.</td><td>Nuclear capacity, long-range missiles, low-cost unmanned vehicles.</td><td>Hwasong missile series, drone swarms.</td></tr></table>

When evaluating studies conducted in different countries within the scope of threat assessment, it has been noted that these studies require high costs, it is
difficult to ensure consistency, sufficient data is not available, sufficient parameters cannot be selected, they may not remain valid for long periods, the level of uncertainty is high, assumptions may not be applicable, retraining is needed, and expert knowledge is limited; they also require a sufficient amount of simulation or test data beforehand, some parameters may be overlooked, and the system may be sensitive to interference [2].

Thirty-eight different criteria used in threat assessment have been identified in fifty-six different studies. The most frequently used criteria include distance from air defense systems, speed, altitude, direction, and target type. National perspectives also influence assessment priorities: while the US and China focus on big data and artificial intelligence, countries such as Türkiye and Israel emphasize operational speed and precision defense. In studies conducted with artificial neural networks, very few criteria are taken into account; training ratios between $60\%$ and $95\%$ yield good results; learning rates between $10^{-2}$ and $10^{-4}$ are used; the mean squared error is generally the performance measure; the number of hidden-layer neurons used as a parameter varies between 1 and 50; different activation functions are used; and the number of iterations ranges from 20 to 5000. For single-hidden-layer artificial neural networks, it has been stated that the number of neurons in the hidden layer should be one more than twice the number of inputs [72]. The lack of a universally adaptable model that can seamlessly integrate different methods and quickly adapt to the dynamic nature of air threats remains an ongoing challenge.

# 3. Methodology

# 3.1. Data collection and criteria selection

Twenty-six criteria were identified that are most accessible, most frequently used in the literature, and allow for data imputation. The 223 target data points were compiled from the results tables of 56 studies shared in the literature.
The frequency of use of the criteria in the 56 different studies, their importance levels, the number of data points obtained from the literature, and the related criteria identified from expert opinions are shown in Table 3.

Table 3. Criteria used in threat assessment studies (continued on next page)

<table><tr><td>Order</td><td>Criteria</td><td>Usage count</td><td>Importance level</td><td>Number of data points obtained</td><td>Related criteria</td></tr><tr><td>1</td><td>Distance to Air Defense System</td><td>49</td><td>0.133152</td><td>198</td><td>10-15-21</td></tr><tr><td>2</td><td>Speed</td><td>48</td><td>0.130435</td><td>218</td><td>5-6-13-18-21</td></tr><tr><td>3</td><td>Altitude</td><td>38</td><td>0.103261</td><td>208</td><td>5-6-18</td></tr><tr><td>4</td><td>Direction / dive angle</td><td>29</td><td>0.078804</td><td>116</td><td>10-18-21</td></tr><tr><td>5</td><td>Target type</td><td>27</td><td>0.073370</td><td>160</td><td>2-3-6-7-8-9-11-17-18-26</td></tr><tr><td>6</td><td>Flutter maneuver rate / number - climb rate / altitude change</td><td>19</td><td>0.051630</td><td>80</td><td>2-3-5</td></tr><tr><td>7</td><td>Damage capacity / mission type / combat capability / strike effectiveness</td><td>19</td><td>0.051630</td><td>20</td><td>5-11</td></tr><tr><td>8</td><td>Jamming capability</td><td>17</td><td>0.046196</td><td>126</td><td>5</td></tr><tr><td>9</td><td>IFF status</td><td>17</td><td>0.046196</td><td>71</td><td>5-13</td></tr><tr><td>10</td><td>Defended Element/Distance to Closest Approach Point to Air Defense System/Confrontation Status</td><td>13</td><td>0.035326</td><td>57</td><td>1-4</td></tr><tr><td>11</td><td>Type/Weight of Munitions carried by target</td><td>10</td><td>0.027174</td><td>25</td><td>5-7-17-21-26</td></tr><tr><td>12</td><td>Flight plan information - route
status</td><td>8</td><td>0.021739</td><td>20</td><td>13</td></tr><tr><td>13</td><td>Intent</td><td>8</td><td>0.021739</td><td>26</td><td>2-9-12-19-20-22-24</td></tr><tr><td>14</td><td>Friendly Element Support / Engagement Status with Threat / Distance to Friendly Element / Within Range Status</td><td>8</td><td>0.021739</td><td>10</td><td>-</td></tr><tr><td>15</td><td>Engagement rule - political climate / within weapons envelope / within restricted area</td><td>8</td><td>0.021739</td><td>10</td><td>1</td></tr><tr><td>16</td><td>Threat uncertainty level/importance</td><td>8</td><td>0.021739</td><td>15</td><td>All</td></tr><tr><td>17</td><td>Target's weapon engagement distance</td><td>6</td><td>0.016304</td><td>28</td><td>5-11</td></tr><tr><td>18</td><td>Radar cross section</td><td>6</td><td>0.016304</td><td>49</td><td>2-3-4-5</td></tr><tr><td>19</td><td>Multiple Target Status/Target Protection Status/Number of Targets/Strike Size</td><td>6</td><td>0.016304</td><td>10</td><td>13</td></tr><tr><td>20</td><td>Target's fire control radar status</td><td>5</td><td>0.013587</td><td>20</td><td>13-24</td></tr><tr><td>21</td><td>Time Required to Hit Target / Target Arrival Time</td><td>5</td><td>0.013587</td><td>20</td><td>1-2-4-11</td></tr><tr><td>22</td><td>Probable Direction of Attack by Country / Approach Direction Status</td><td>4</td><td>0.010870</td><td>20</td><td>13</td></tr><tr><td>23</td><td>Weather conditions - visibility status</td><td>4</td><td>0.010870</td><td>10</td><td>-</td></tr><tr><td>24</td><td>Missile launch status</td><td>3</td><td>0.008152</td><td>10</td><td>13-20</td></tr><tr><td>25</td><td>Target airborne time</td><td>2</td><td>0.005435</td><td>5</td><td>-</td></tr><tr><td>26</td><td>Target maximum range</td><td>1</td><td>0.002717</td><td>20</td><td>5-11</td></tr></table>

When selecting the criteria to be used in the model, the threat-score data calculated from these criteria in the studies in the literature were taken into
account. The data shared in the studies were compiled and used as training, testing, and validation data. Criteria with no data were not considered. A total of 5,798 data points were compiled for 26 criteria across 223 different target situations: 1,552 data points were readily available, and the remaining 4,246 were created through data-completion processes. A total of 223 output data points were also collected. Data and results from threat assessment studies in the literature were compiled and used as training and test data, with the aim of reflecting an average truth based on results obtained from different countries' perspectives.

# 3.2. Data preprocessing

When collecting data, numerical values were standardized by converting them to the same unit (e.g., km/h was converted to m/s). An "Unknown" category was added to categorical data. Twenty different target types were identified to standardize target-type data, and thirty-eight experts were asked to prioritize engagement with the twenty target types simultaneously. The results revealed differing perspectives, with significant standard deviations in target prioritization. The average and median of the experts' target rankings were determined, along with the weight rankings of the targets. Mode values were not considered, as they were not meaningful. After all criterion data were collected and standardized, the risk direction of each criterion and the maximum and minimum value ranges were determined, as was the number of criteria obtained for each target. Data completion was performed according to priority and the relationships between criteria. While considering similar situations, the completion process examined the most important and most data-rich criteria in order. In the completion process, the average, median, or mode value was taken according to the type of the criterion for targets with similar threat scores.
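As an illustration only, the mean-based completion rule just described might be sketched as follows. The toy data, the threat-score similarity measure, and the neighbour count `k` are all assumptions; the study's actual procedure uses the full criterion relationship network.

```python
import numpy as np

def impute_column(values, threat_scores, k=5):
    """Fill NaNs in one criterion column from the k targets with the most
    similar threat scores, taking the mean (for a quantitative criterion).
    The similarity measure and k are illustrative assumptions; the median
    would be used instead for a qualitative criterion."""
    values = np.asarray(values, dtype=float)
    scores = np.asarray(threat_scores, dtype=float)
    known = np.where(~np.isnan(values))[0]  # targets that have data
    out = values.copy()
    for i in np.where(np.isnan(values))[0]:
        # known targets ranked by closeness of their threat score
        nearest = known[np.argsort(np.abs(scores[known] - scores[i]))[:k]]
        out[i] = values[nearest].mean()
    return out

# Invented toy data: one criterion column with two gaps
col = [3.0, np.nan, 5.0, 4.0, np.nan, 9.0]
score = [0.30, 0.35, 0.50, 0.40, 0.90, 0.95]
filled = impute_column(col, score, k=2)
```

Each gap is filled from the two targets whose threat scores sit closest, so the second missing value draws on the two high-threat targets rather than the whole column.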
A completion process specific to each criterion was performed when filling in missing values. For each missing criterion data point, a value was obtained using a nearest-cluster-based estimation method over clusters created from the related criteria in the relationship network. The algorithm steps are outlined below.

Step 1: Identify the criterion with the highest number of available data points.
Step 2: Sort the criteria linked to this criterion by order of importance.
Step 3: Filter out targets with missing data in the relevant criterion.
Step 4: Filter the linked criterion with the highest importance level. If it has no data, move on to the next linked criterion in order of importance. If none of the linked criteria has data, return to Step 1 and move on to the next criterion.
Step 5: Filter the linked criterion value and surrounding data for all targets.
Step 6: If the criterion is quantitative, take the average; if it is qualitative, take the median. (Since repetition of identical values is unlikely in such problems, the mode is not meaningful.)
Step 7: Complete the missing data with this value.
Step 8: Return to Step 1 and move on to the next criterion. Continue until all table data are complete.

The normalization process was performed using the following formulas, where min(x) denotes the minimum value in the relevant criterion column and max(x) the maximum.

Minimum-Maximum Method; when risk increases with the criterion value:

$$ z = \frac {x - \min (x)}{\max (x) - \min (x)} \tag {1} $$

When risk decreases as the criterion value increases (inverse direction):

$$ z = \frac {\max (x) - x}{\max (x) - \min (x)} \tag {2} $$

One of the criteria, the threat uncertainty level, is calculated separately for each target after normalization, based on the number of criteria for which information is available.
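A minimal sketch of the min-max normalization in both risk directions, the 1-10 rescaling, and the resulting threat clarity and uncertainty levels described here. The sample values are invented for illustration; only the formulas themselves come from the text.

```python
import numpy as np

def min_max(x, risk_increases=True):
    """Eq. (1) when larger values mean higher risk; the inverse-direction
    variant (Eq. (2)) otherwise."""
    x = np.asarray(x, dtype=float)
    z = (x - x.min()) / (x.max() - x.min())
    return z if risk_increases else 1.0 - z

def threat_clarity(c_scaled):
    """Clarity from the s observed criteria of one target, each already
    mapped to [1, 10]; uncertainty is 1 minus this value."""
    c = np.asarray(c_scaled, dtype=float)
    s = len(c)
    return (np.log(c).sum() + 1.0) / (np.log(10.0 ** s) + 1.0)

# Invented example: normalize one criterion column (speed across targets)
speeds = [200.0, 340.0, 680.0, 900.0]   # m/s; higher speed = higher risk
z = min_max(speeds)                     # values in [0, 1]

# Invented example: one target with s = 4 of 25 criteria observed,
# each normalized value rescaled to [1, 10] via z*9 + 1
observed = np.array([0.0, 0.2, 0.7, 1.0]) * 9 + 1
clarity = threat_clarity(observed)
uncertainty = 1.0 - clarity
```

A target with all observed criteria at the maximum value 10 would reach a clarity of exactly 1; sparse or low values push the clarity down and the uncertainty, and hence the risk contribution, up.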
Criteria for which information is unavailable increase the uncertainty level, and the uncertainty level increases the risk: the closer it is to 1, the greater the uncertainty and the risk. This criterion is included in the threat assessment so that targets with low visibility, about which very little information is obtained, are not overlooked. The normalized values were converted to a 1-10 scale using the $x \cdot 9 + 1$ transformation so that formula (3) below could be applied. The threat clarity level was calculated from these values based on how many data points were obtained without imputation from the twenty-five criteria ($s \leq 25$). $C_{ij} \in [1,10]$ represents the $j$-th criterion value of the $i$-th target [73].

$$ \text{Threat Clarity Level} = \mathrm{C}_{i} = \frac{\ln \left(\prod_{j=1}^{s} \mathrm{C}_{ij}\right) + 1}{\ln \left(10^{s}\right) + 1} = \frac{\sum_{j=1}^{s} \ln \left(\mathrm{C}_{ij}\right) + 1}{\ln \left(10^{s}\right) + 1} \tag{3} $$

$$ \text{Threat Uncertainty Level} = 1 - \mathrm{C}_{i} \tag{4} $$

This conversion makes the Threat Uncertainty Level a risk-increasing criterion.

# 3.3. Threat score calculation

After all criterion values were determined, the weighted total threat score $(\hat{v})$ was calculated based on the importance levels of the criteria. Since the average absolute error (AAE) between the weighted total threat score and the threat scores compiled from the literature $(v_j)$ is close to zero, the two are similar.
$$ AAE = \frac{\sum \left| v_{j} - \hat{v} \right|}{n} = 0.03977141 \tag{5} $$

The relationship between the difference (literature threat score minus weighted threat score obtained from the criterion importance levels) and the number of data points obtained for each threat group is shown in Figure 1. As the number of data points increases, the difference tends to increase slightly in the positive direction; however, since the difference is not significant, the missing-data completion process has yielded effective results. Figure 1 shows that even after imputation, the difference between the original literature threat scores and the calculated weighted scores remains low (mean absolute error $= 0.039$). This indicates that the imputation method produces reliable results without compromising data integrity. The slight upward trend suggests that the model offers a slightly different (and possibly more accurate) perspective than the literature for targets with more comprehensive data.

Figure 1. Difference between literature threat score and weighted threat score

To standardize the consistency between threat scores obtained from the literature and eliminate the positive bias introduced by the imputed data table, the following combination was performed to obtain a combined geometric threat score. The aim is to reduce the drift caused by imputation by pulling the threat score toward the smaller value where the difference is large.

$$ \text{Combined Geometric Threat Score (CGTS)} = \sqrt{v_{j} \cdot \hat{v}} \tag{6} $$

The Combined Geometric Threat Score is proposed for use as training data in artificial neural networks. This reduces the impact of biased assessments that may arise from different sources.

# 3.4.
Artificial neural network model

# 3.4.1. Layer structure

The basic architecture of the model consists of three layers:

Input layer: normalized values of the twenty-six criteria.
Output layer: CGTS values.
Hidden layer: the number of neurons is kept variable as a parameter.

The neuron outputs in the hidden layer are defined as follows. In equation (7), $W_{1}$ is the weight matrix, $b_{1}$ is the bias vector (which shifts the activation threshold of the neuron), and $x$ is the input vector. The ReLU activation function introduces nonlinearity into the model. $h^{(1)}$ denotes the output of the first hidden layer.

$$ h ^ {(1)} = \operatorname {ReLU} \left(W _ {1} \cdot x + b _ {1}\right) \tag {7} $$

The output layer is calculated as a linear combination. In equation (8), $h^{(2)}$ is the output of the last hidden layer, $W_{3}$ is the weight matrix of the output layer, $b_{3}$ is the bias vector of the output layer, and $y_{predict}$ is the result produced by the model.

$$ y _ {\text {predict}} = W _ {3} \cdot h ^ {(2)} + b _ {3} \tag {8} $$

# 3.4.2. Activation function and learning algorithm

The Levenberg-Marquardt backpropagation algorithm was used to train the model. It is a hybrid algorithm that minimizes the error function using a combination of least squares, gradient descent, and Gauss-Newton methods. In equation (9), $e$ is the error vector. In equation (10), $w_{k}$ is the weight vector at the $k$-th iteration, $J$ is the Jacobian matrix (the derivative of the error terms with respect to the weights), $I$ is the identity matrix, and $\mu$ is the damping coefficient.

$$ e = y _ {i} - f \left(x _ {i}, w\right) \tag {9} $$

$$ w _ {k + 1} = w _ {k} - \left[ J ^ {T} J + \mu I \right] ^ {- 1} J ^ {T} e \tag {10} $$

How the algorithm works:

- Initially, $\mu$ starts with a small value (e.g., 0.001).
- If the error decreases in the new iteration, $\mu$ is reduced and the algorithm behaves like Gauss-Newton (fast convergence).
- If the error increases, $\mu$ is increased and the algorithm behaves like gradient descent (more stable steps).
- In this way, $\mu$ is dynamically adjusted to reach the optimum point.

# 3.4.3. Performance function

Mean Square Error (MSE) was used to measure model performance and is minimized throughout the training process. $N$ is the total sample size, $y_{\text{real}}^{(i)}$ is the obtained CGTS value, and $f(x_i, w)$ is the predicted value.

$$ E (w) = \frac {1}{N} \sum_ {i = 1} ^ {N} \left[ y _ {\text {real}} ^ {(i)} - f \left(x _ {i}, w\right) \right] ^ {2} \tag {11} $$

# 3.4.4. Training and testing processes

Data split: training, validation, and test ratios were tested in different combinations. The training ratio was set to $70\%$, as recommended in the literature, and data were split randomly according to the specified ratios.

Batch processing: data were included in the training process in batches.

Number of epochs: early stopping was applied when training began to show overfitting.

Regularization techniques:

Dropout: specific neurons in the layers were randomly disabled.

L2 regularization: $L_{total}$ is the total loss value, the main error function; it includes both the model's error amount ($L$) and the regularization penalty shown in equation (12). For $L$, mean squared error (MSE) is preferred for regression, while cross-entropy loss is preferred for classification; it measures the difference between the model's prediction and the actual value. $\lambda$ is the regularization coefficient, which determines how much the weights are penalized: a large $\lambda$ reduces overfitting more but makes the model simpler, while a small $\lambda$ weakens the regularization effect.
$\sum \|W\|^{2}$ is the sum of the squares of all weights; penalizing large weights makes the model simpler and more generalizable.

$$ L_{total} = L + \lambda \sum \|W\|^{2} \tag{12} $$

Model performance was measured using test data not used in training, and CGTS values were compared with predicted values.

# 3.4.5. Threat assessment model flowchart

Step 1: Collecting sensor data in a standardized format
Step 2: Completing missing data and verifying information from different sensors, considering the criterion relationship network and starting with the criterion that yields the most data
Step 3: Normalizing the twenty-five criteria
Step 4: Calculating the threat uncertainty level, which is the twenty-sixth criterion
Step 5: Obtaining results using the trained ANN model

# 4. Simulation and empirical results

The best results were obtained with a training rate of $70\%$, a validation rate of $10\%$, and a test rate of $20\%$. The mean square error ranged from 0.0005 to 0.0072, and the correlation coefficient (R) was 0.9617. These results show that the model predicts threat scores with high accuracy.

# 4.1. Training and test results

The dataset consists of two hundred twenty-three target sets, each a vector with twenty-six inputs. Figure 2 shows the training process of the proposed ANN architecture: a set of 223 targets with 26 inputs enters the training process through a hidden layer of 10 neurons, and a single result, the threat score, is obtained in the output layer.

Figure 2. ANN training model

As shown in Figure 2, the proposed ANN model consists of an input layer with 26 neurons (representing the normalized criteria), a hidden layer (with the number of neurons optimally set to 10), and an output layer that produces a single CGTS value. This architecture effectively processes multidimensional input data to learn the complex, nonlinear relationships required for threat assessment.
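As an illustrative sketch (not the authors' implementation), the forward pass of equations (7) and (8) for the single-hidden-layer 26-10-1 configuration of Figure 2 can be written in a few lines; the weights below are random placeholders rather than the trained model's parameters, and `predict_threat_score` is a hypothetical helper name:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters for the 26-10-1 architecture of Figure 2.
# These are random illustrative values, NOT the trained model's weights.
W1 = rng.normal(scale=0.1, size=(10, 26))  # hidden-layer weight matrix
b1 = np.zeros(10)                          # hidden-layer bias vector
W3 = rng.normal(scale=0.1, size=(1, 10))   # output-layer weight matrix
b3 = np.zeros(1)                           # output-layer bias vector

def relu(z):
    """ReLU activation introducing nonlinearity (equation (7))."""
    return np.maximum(0.0, z)

def predict_threat_score(x):
    """Forward pass: hidden layer (equation (7)), then linear output (equation (8))."""
    h = relu(W1 @ x + b1)           # hidden-layer activations
    return float((W3 @ h + b3)[0])  # predicted CGTS, a single scalar

# One target described by 26 normalized criterion values in [0, 1]
x = rng.uniform(size=26)
print(predict_threat_score(x))
```

In a real training setup these parameters would be fitted with the Levenberg-Marquardt updates of equation (10) rather than left at random values.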
The results for different hidden-layer sizes in the most successful training, validation, and testing configurations are shown in Table 4.

Table 4. Training and test results

<table><tr><td>Data distribution (Training / Validation / Test)</td><td>Hidden neurons</td><td>MSE (Test)</td><td>R (Test)</td><td>Epochs</td></tr><tr><td>70 / 10 / 20</td><td>10</td><td>0.0025</td><td>0.9617</td><td>9</td></tr><tr><td>70 / 10 / 20</td><td>10</td><td>0.0027</td><td>0.9461</td><td>9</td></tr><tr><td>70 / 10 / 20</td><td>5</td><td>0.0060</td><td>0.8883</td><td>12</td></tr><tr><td>70 / 10 / 20</td><td>15</td><td>0.0049</td><td>0.9138</td><td>9</td></tr></table>

The results show that the model with ten hidden neurons achieved an excellent balance between accuracy and efficiency on the test set, with a high regression value (R = 0.9617) and low error (MSE = 0.0025), and generalized effectively. This is because a hidden layer of ten neurons provides an optimal balance between complexity and generalization, capturing the nonlinear features in the data while preventing overfitting. Performance results for training, testing, error, and correlation are shown in Figure 3.

Figure 3. Performance results

The performance results demonstrate that the model's training process is sound. The plot in the upper left shows that the training, validation, and test errors all decreased and converged smoothly, indicating no significant overfitting. The error histogram (upper right) confirms the model's high accuracy: most prediction errors cluster around zero. The regression plots for training, testing, and validation (below) show R values very close to 1 (0.96+). This strong linear relationship between the predicted and actual CGTS values across all data subsets confirms the model's generalization ability.

# 4.2.
Comparative analysis

A comparison between this study and artificial neural network studies in the literature is shown in Table 5.

Table 5. Comparison with studies in the literature

<table><tr><td>Studies</td><td>Model fitting</td><td>Number of data</td><td>Epoch number</td><td>Number of criteria</td></tr><tr><td>Reference [10]</td><td>-</td><td>-</td><td>-</td><td>3</td></tr><tr><td>Reference [6]</td><td>-</td><td>-</td><td>20</td><td>-</td></tr><tr><td>Reference [17]</td><td>-</td><td>1500</td><td>100</td><td>4</td></tr><tr><td>Reference [20]</td><td>0.0003</td><td>140</td><td>2100</td><td>4</td></tr><tr><td>Reference [55]</td><td>-</td><td>-</td><td>30 - 100 - 500 - 800</td><td>-</td></tr><tr><td>Reference [44]</td><td>0.0100</td><td>60</td><td>148</td><td>8</td></tr><tr><td>Reference [39]</td><td>-</td><td>100</td><td>30</td><td>7</td></tr><tr><td>Reference [24]</td><td>0.0010</td><td>336</td><td>30</td><td>5</td></tr><tr><td>Reference [21]</td><td>0.0100</td><td>75</td><td>-</td><td>4</td></tr><tr><td>Reference [38]</td><td>-</td><td>4000</td><td>20</td><td>-</td></tr><tr><td>Reference [74]</td><td>0.0010</td><td>100</td><td>150</td><td>-</td></tr><tr><td>Reference [63]</td><td>0.0010</td><td>600</td><td>5000</td><td>55</td></tr><tr><td>This study</td><td>0.0025</td><td>223</td><td>9</td><td>26</td></tr></table>

This study considered more criteria than most of the studies in the literature. Owing to early stopping, training converged in fewer epochs than the other studies. Although the data matrix (223 × 26 cells) was larger than in most other studies, effective results were obtained in a shorter time. This allows threat assessment to be repeated over short periods as real-time threat data change. Model validation yielded similar results. The model's efficiency is taken to be directly proportional to the number of data points and criteria, and inversely proportional to the number of epochs and the model fitting error (MSE).
Accordingly, efficiency was calculated using equation (13), and the results are shown in Table 6.

$$ \text{Efficiency} = \frac{\text{Number of Data} \times \text{Number of Criteria}}{\text{Epoch Number} \times \text{Model Fitting}} \tag{13} $$

Table 6. Efficiency

<table><tr><td>Studies</td><td>Efficiency</td></tr><tr><td>Reference [10]</td><td>-</td></tr><tr><td>Reference [6]</td><td>-</td></tr><tr><td>Reference [17]</td><td>-</td></tr><tr><td>Reference [20]</td><td>889</td></tr><tr><td>Reference [55]</td><td>-</td></tr><tr><td>Reference [44]</td><td>324</td></tr><tr><td>Reference [39]</td><td>-</td></tr><tr><td>Reference [24]</td><td>56000</td></tr><tr><td>Reference [21]</td><td>-</td></tr><tr><td>Reference [38]</td><td>-</td></tr><tr><td>Reference [74]</td><td>-</td></tr><tr><td>Reference [63]</td><td>6600</td></tr><tr><td>This study</td><td>26577</td></tr></table>

This metric indicates how much data complexity the model can process per unit learning cycle (epoch) and shows that this work achieves higher learning efficiency than most comparable studies.

When the same initial data are used to obtain the output from the trained artificial intelligence model, the difference between the Combined Geometric Threat Score (CGTS) used as the training target and the Artificial Intelligence Threat Score produced by the model is quite small, and the trend in this difference with the number of initial data points has been reduced. Thus, the quality of the training and test data has been improved, and the training model output provides very good results. This is shown in Figure 4.

Figure 4. Difference between the Combined Geometric Threat Score and the Artificial Intelligence Threat Score

Figure 4 shows that the difference between the outputs of the trained ANN model (Artificial Intelligence Threat Score) and the target values used in training (Combined Geometric Threat Score, CGTS) is close to zero.
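As a quick check on the efficiency metric, the values in Table 6 can be reproduced directly from equation (13); the small helper below is illustrative only, plugging in the figures reported in Table 5:

```python
def efficiency(num_data, num_criteria, epochs, model_fitting):
    """Equation (13): data complexity processed per unit learning cost."""
    return (num_data * num_criteria) / (epochs * model_fitting)

# Values taken from Table 5, rounded to the nearest integer as in Table 6
print(round(efficiency(140, 4, 2100, 0.0003)))  # Reference [20]: 889
print(round(efficiency(336, 5, 30, 0.0010)))    # Reference [24]: 56000
print(round(efficiency(60, 8, 148, 0.0100)))    # Reference [44]: 324
```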
Figure 4 provides the clearest evidence that the model has learned and can predict the CGTS with high accuracy. Furthermore, the difference trend observed in Figure 1, which depended on the initial number of data points, has been eliminated by the trained model. The artificial neural network model has yielded results with a high accuracy rate. By providing an averaged view of threats, the impact of geographic variation has been reduced. The criterion relationship network has ensured that the missing data completion process is performed correctly. In future studies, threat-based assignment can enable air defense systems to engage automatically. The contribution of this study to the literature is the dynamic updating of the threat score using an artificial neural network and its integration with the missing data completion algorithm.

# 4.3. Simulation scenario

The training model was tested on five different targets, and sample simulation results are shown in Table 7. Missing values were completed using data imputation methods before the threat score was computed. For example, Target 3 (bomber aircraft) was assessed as having the highest threat score (0.7303) due to its high ammunition capacity (8460) and low engagement distance (10). In contrast, Target 4 (SEAD aircraft), despite its high electronic warfare capability, received a relatively lower threat score (0.5904) due to its higher engagement distance (28). These results demonstrate that the model can perform realistic and interpretable threat prioritization by balancing different criteria. The threat scores obtained can be used to prioritize automatic engagement by air defense systems.

Table 7.
Simulation results

<table><tr><td>Order</td><td>Criteria</td><td>Target 1</td><td>Target 2</td><td>Target 3</td><td>Target 4</td><td>Target 5</td></tr><tr><td>1</td><td>Distance to Air Defense System</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>2</td><td>Speed</td><td>315.9</td><td>275.4</td><td>267.3</td><td>305.1</td><td>280.8</td></tr><tr><td>3</td><td>Altitude</td><td>12500</td><td>11000</td><td>9500</td><td>10000</td><td>9500</td></tr><tr><td>4</td><td>Direction / dive angle</td><td>3.7912</td><td>24.024</td><td>0</td><td>24.8864</td><td>17.1136</td></tr><tr><td>5</td><td>Target type</td><td>Air Defence Fighter Aircraft</td><td>Air Defence Fighter Aircraft</td><td>Bomber Aircraft</td><td>SEAD Aircraft</td><td>SEAD Aircraft</td></tr><tr><td>6</td><td>Flutter maneuver rate / number - climb rate / altitude change</td><td>1.09</td><td>1.15</td><td>0.68</td><td>1.09</td><td>1.09</td></tr><tr><td>7</td><td>Damage capacity / mission type / combat capability / strike effectiveness</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>8</td><td>Jamming capability</td><td>Negative</td><td>Negative</td><td>Positive (Low)</td><td>Positive (High)</td><td>Positive (High)</td></tr><tr><td>9</td><td>IFF status</td><td>Foe</td><td>Foe</td><td>Foe</td><td>Foe</td><td>Foe</td></tr><tr><td>10</td><td>Defended Element / Distance to Closest Approach Point to Air Defense System / Confrontation Status</td><td>900</td><td>1816</td><td>8460</td><td>900</td><td>900</td></tr><tr><td>11</td><td>Type / Weight of Munitions Carried by Target</td><td>None</td><td>None</td><td>None</td><td>None</td><td>None</td></tr><tr><td>12</td><td>Flight plan information - route status</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>13</td><td>Intent</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>14</td><td>Friendly Element Support / Engagement Status with Threat / Distance to Friendly Element / Within Range Status</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>15</td><td>Engagement rule - political climate / within weapons envelope / within restricted area</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>16</td><td>Threat uncertainty level / importance</td><td>0.6109</td><td>0.6037</td><td>0.5741</td><td>0.6098</td><td>0.5711</td></tr><tr><td>17</td><td>Target's weapon engagement distance</td><td>15</td><td>15</td><td>10</td><td>28</td><td>28</td></tr><tr><td>18</td><td>Radar cross section</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>19</td><td>Multiple Target Status / Target Protection Status / Number of Targets / Strike Size</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>20</td><td>Target's fire control radar status</td><td>Active</td><td>Active</td><td>Active</td><td>Inactive</td><td>Active</td></tr><tr><td>21</td><td>Time Required to Hit Target / Target Arrival Time</td><td>1</td><td>15</td><td>1</td><td>5</td><td>1</td></tr><tr><td>22</td><td>Probable Direction of Attack by Country / Approach Direction Status</td><td>1 (30%)</td><td>2 (50%)</td><td>1 (30%)</td><td>3 (80%)</td><td>3 (80%)</td></tr><tr><td>23</td><td>Weather conditions - visibility status</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>24</td><td>Missile launch status</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>25</td><td>Target airborne time</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>26</td><td>Target maximum range</td><td>3000</td><td>2900</td><td>1100</td><td>3000</td><td>3000</td></tr><tr><td></td><td>Threat Score</td><td>0.6380</td><td>0.6318</td><td>0.7303</td><td>0.5904</td><td>0.6279</td></tr></table>

# 5.
Conclusions

The results obtained show error rates similar to those reported in the literature. Furthermore, results can be achieved with fewer iterations relative to the amount of data, indicating that the model's missing data completion and dynamic learning features are effective. Recalculating the threat score using the geometric mean method has also increased the model's generalization ability, yielding a high-quality training set. This approach successfully reduces the "human factor" variability mentioned in studies such as [1] and [62] and provides a consistent and automatic basis for evaluation.

The model has been trained using simulated and literature-derived data; its response to noisy and incomplete data in a real-time combat environment has not yet been fully tested. From a practical perspective, real-time analysis, error reduction, and scalability are the model's strengths. However, the black-box nature of artificial neural networks, data dependency that changes with new technologies, and vulnerability to cyberattacks should be addressed as weaknesses in operational applications.

Most existing studies are static in nature and cannot adapt to changing threat conditions, so there is a significant gap in the development of dynamic and adaptive models. Many systems rely on static rules or require extensive, often confidential, datasets for training, which limits their practical applicability and adaptability to new threat profiles. This study addresses that gap by proposing a hybrid methodology that synthesizes historical information from the literature with the adaptive learning capabilities of ANNs, integrating artificial neural networks with dynamic learning capabilities into the threat assessment process.
The main contributions of this article are as follows:

- The criteria used in fifty-six different studies in the literature were compiled and standardized according to frequency of use, achieving data standardization.
- Based on the connections between the criteria examined for threats, the criterion relationship network was revealed for the first time, and missing data completion was performed over this network. A comprehensive threat assessment dataset consisting of 26 criteria was thereby created. The missing data completion algorithm solves the problem of combining the limited and scattered datasets in the literature into a usable whole.
- An innovative approach to calculating the Combined Geometric Threat Score was proposed to generate reliable training, testing, and validation data. The CGTS neutralizes biased threat scores from different studies, creating a more reliable target variable.
- An ANN model that accurately predicts threat scores and demonstrates high performance and generalizability across different validation scenarios was designed and validated (R = 0.96).

The proposed model aims to provide a foundation for next-generation, network-centric air defense systems capable of real-time, intelligent threat assessment. This study has developed a dynamic, learning artificial neural network model for the threat assessment process in air defense systems. The model successfully predicts threat scores by integrating data from different sources. The contribution of this study to the literature is the dynamic updating of the threat score using an artificial neural network and its integration with the missing data completion algorithm.