Shallow Artificial Neural Networks for Forecasting

Prof. Dr. Erol Egrioglu
Giresun University, Department of Statistics, Turkey
(Visiting Researcher for CMAF)
Can shallow ANNs produce satisfactory forecasting performance for forecast competition data sets?
Which automatic forecasting strategy can be preferred for shallow ANNs?
Which pre-processing methods can be preferred in shallow ANNs?
Are shallow ANNs better than other ML methods?
Presentation Plan
Summarizing three shallow ANNs: single multiplicative neuron model ANN (SMNM-ANN), Pi-Sigma ANN (PS-ANN), dendritic neuron model ANN (DNM-ANN)
Introducing automatic forecasting methods for SMNM-ANN, PS-ANN and DNM-ANN
Exploring the performance of SMNM-ANN, PS-ANN and DNM-ANN on competition data sets
Discussions
Artificial Neural Networks
Deep ANNs
Shallow ANNs
Artificial Neuron Models
McCulloch and Pitts neuron (additive)
Simple perceptron
It cannot solve the XOR problem
Artificial Neuron Models
Multiplicative Neuron Model
SMNM-ANN
It can solve the XOR problem
Artificial Neuron Models
Dendritic Neuron Model
(Synaptic Function) → (Dendritic Function) → (Membrane Function) → (Soma Function) → Output O
Three Shallow ANNs
SMNM-ANN for Forecasting
Training of SMNM-ANN
Pre-processing for SMNM-ANN
Model Adequacy and Input Significance for SMNM-ANN
Model Adequacy
Input Significance
PS-ANN for Forecasting
DNM-ANN for Forecasting
Automatic Forecasting Methods Based on SMNM-ANN, PS-ANN, DNM-ANN
AFM-(SMNM;PS;DNM)-ANN-1: Pre-processing (differencing operations) and fixed architecture
AFM-(SMNM;PS;DNM)-ANN-2: Pre-processing (differencing operations) and architecture selection based on a validation data set
AFM-(SMNM;PS;DNM)-ANN-3: Pre-processing (differencing operations) and architecture selection based on input significance tests
AFM-(SMNM;PS;DNM)-ANN-4: Pre-processing (detrending and deseasonalization operations) and architecture selection based on input significance tests
AFM-SMNM-ANN-1
Start → Seasonal Differencing → Differencing → Train SMNM-ANN with a predetermined architecture by PSO → Calculate Forecasts → End
AFM-SMNM-ANN-2
Start → Seasonal Differencing → Differencing → Determine a set of candidate architectures → Train SMNM-ANN for the candidate architectures by PSO → Calculate validation-data performance for all candidates → Select the best architecture → Calculate Forecasts → End
AFM-SMNM-ANN-3
Start → Seasonal Differencing → Differencing → Determine lagged variables (inputs) using significant partial autocorrelations → Train SMNM-ANN by PSO → Apply input significance tests → Are all inputs significant? If yes, calculate forecasts and end; if no, exclude the insignificant inputs and retrain
AFM-SMNM-ANN-4
Start → Seasonal Adjustment (Winters multiplicative ES method) → De-trend (linear regression) → Determine lagged variables (inputs) using significant partial autocorrelations → Train SMNM-ANN by PSO → Apply input significance tests → Are all inputs significant? If yes, calculate forecasts and end; if no, exclude the insignificant inputs and retrain
Exploring Forecasting Performances
Computational Intelligence in Forecasting Competition 2016
M4 Competition (Weekly, Hourly, Daily Time Series)
CIF 2016
M4 Competition Hourly Data Results: Rank = 12 among 61 methods in M4
M4 Competition Weekly Data Results: Rank = 42
M4 Competition Daily Data Results: Rank = 32
Comparison with the best method
Conclusions
Strategy 1 and Strategy 3 are the best strategies.
Strategy 3 brings a statistical perspective to ANN models.
Strategy 2 was tried on CIF 2016 and was not better than Strategies 1 and 3.
Strategy 4 is generally the worst strategy.
SMNM, PS and DNM ANNs perform better than the MLP and RNN benchmarks.
The performance of the strategies varies with the ANN type and the data type.
Pre-processing methods are important for the performance of ANNs.
SMNM can produce competitive results against the winning method.
Future Research for SMNM, PS and DNM
Outliers and change points were not taken into consideration in the strategies; handling them could increase performance.
Ensemble strategies proved useful in M4 and can be considered.
Different pre-processing methods can be considered.
Global models can be obtained by using SMNM, PS and DNM.
Hybridization of statistical methods with SMNM, PS and DNM can be useful.
References
Yadav, R.N., Kalra, P.K., & John, J. (2007). Time series prediction with single multiplicative neuron model. Applied Soft Computing, 7, 1157–1163.
Todo, Y., Tamura, H., Yamashita, K., & Tang, Z. (2014). Unsupervised learnable neuron model with nonlinear interaction on dendrites. Neural Networks, 60, 96–103.
Shin, Y., & Ghosh, J. (1991). The pi-sigma network: An efficient higher-order neural network for pattern classification and function approximation. In Proceedings of the International Joint Conference on Neural Networks.
Mohammadi, S. (2018). A new test for the significance of neural network inputs. Neurocomputing, 273, 304–322.
Makridakis, S., Spiliotis, E., & Assimakopoulos, V. (2020). The M4 Competition: 100,000 time series and 61 forecasting methods. International Journal of Forecasting, 36, 54–74.
Slide Note

The title of my presentation is «Shallow Artificial Neural Networks for Forecasting». In recent years, complicated methods such as ANNs have been used for modelling thanks to increased computing power, and ANNs look like powerful tools for forecasting in the literature. In this talk, the performance of three shallow ANNs is shown on forecast competition data sets, different strategies for automatic forecasting are compared, and results of deep and shallow ANNs are also compared. The aim of this talk is to discuss the following questions:

Do shallow ANNs have satisfactory forecasting performance?

Which automatic forecasting strategy can be preferred for shallow ANNs?

Which pre-processing methods can be preferred in shallow ANNs?

Which are better, shallow or deep ANNs?

Presentation Transcript


  1. Shallow Artificial Neural Networks for Forecasting Prof. Dr. Erol Egrioglu Giresun University, Department of Statistics, Turkey (Visiting Researcher for CMAF)

  2. Can shallow ANNs produce satisfactory forecasting performance for forecast competition data sets? Which automatic forecasting strategy can be preferred for shallow ANNs? Which pre-processing methods can be preferred in shallow ANNs? Are shallow ANNs better than other ML methods?

  3. Presentation Plan: summarizing the single multiplicative neuron model ANN (SMNM-ANN), the Pi-Sigma ANN (PS-ANN) and the dendritic neuron model ANN (DNM-ANN); introducing automatic forecasting methods for SMNM-ANN, PS-ANN and DNM-ANN; exploring the performance of SMNM-ANN, PS-ANN and DNM-ANN on competition data sets; discussions.

  4. Artificial Neural Networks can be classified by architecture: feed-forward vs. recurrent, fully connected vs. partially connected, and deep vs. shallow.

  5. Artificial Neural Networks. Deep ANNs need more data, use many parameters, and are good for big data (image processing, global models). Shallow ANNs can work with less data, use an appropriate number of parameters, give good performance with an appropriate amount of data, and have a simple model structure.

  6. Deep ANNs: MLP with more than one hidden layer, recurrent ANNs (LSTM, GRU), convolutional ANNs.

  7. Shallow ANNs: multilayer perceptron (MLP), support vector machines, Elman-type recurrent ANN, Jordan-type recurrent ANN, radial basis function ANN.

  8. Artificial Neuron Models: the McCulloch and Pitts neuron (additive) and the simple perceptron; it cannot solve the XOR problem.

  9. Artificial Neuron Models: the multiplicative neuron model, used in SMNM-ANN; it can solve the XOR problem.

  10. Artificial Neuron Models: the dendritic neuron model chains a synaptic function, a dendritic function, a membrane function and a soma function to produce the output O.

  11. Three Shallow ANNs

  12. SMNM-ANN for Forecasting

$$\hat{x}_t = \frac{1}{1+\exp\left(-\prod_{i=1}^{p}\left(w_i x_{t-i}+b_i\right)\right)}, \qquad \underset{w_i,\, b_i;\; i=1,2,\dots,p}{\arg\min} \; \sum_{t=p+1}^{n}\left(x_t-\hat{x}_t\right)^2$$
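A minimal sketch of the forward pass and objective above (the function names are mine; the lag ordering inside the product does not affect the result):

```python
import numpy as np

def smnm_forecast(x_lags, w, b):
    """Single multiplicative neuron: sigmoid of the product of (w_i * x_i + b_i)."""
    net = np.prod(w * x_lags + b)        # multiplicative aggregation over the p inputs
    return 1.0 / (1.0 + np.exp(-net))    # logistic activation

def sse_objective(params, series, p):
    """Sum of squared one-step-ahead errors, the quantity minimized above."""
    w, b = params[:p], params[p:]
    errors = [series[t] - smnm_forecast(series[t - p:t], w, b)
              for t in range(p, len(series))]
    return float(np.sum(np.square(errors)))
```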

  13. Training of SMNM-ANN with a modified PSO: dynamic inertia weight; dynamic social and cognitive coefficients; a restarting strategy (every 100 iterations, the weights and biases are newly initialized); and an early stopping rule (stop after 30 successive failures to update the best result).

  14. Training of SMNM-ANN. The modified PSO gives quick and stable training, helps to avoid the local-optimum trap, and is derivative-free, so robust error metrics can be used as the objective function.
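Below is a simplified sketch of such a PSO loop with the restart and early-stopping rules above. The linear schedules for the inertia weight and the social/cognitive coefficients are generic assumptions, not necessarily the exact schedules used in the talk:

```python
import numpy as np

def pso_train(objective, dim, n_particles=30, max_iter=1000, seed=0):
    """Modified PSO: dynamic coefficients, restarts every 100 iterations,
    early stopping after 30 successive failures to improve the best result."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_f = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    gbest_f = pbest_f.min()
    fails = 0
    for it in range(max_iter):
        frac = it / max_iter
        w = 0.9 - 0.5 * frac             # dynamic inertia weight (0.9 -> 0.4)
        c1 = 2.5 - 2.0 * frac            # cognitive coefficient decreases
        c2 = 0.5 + 2.0 * frac            # social coefficient increases
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        if (it + 1) % 100 == 0:          # restarting strategy: re-initialize particles
            pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
        f = np.array([objective(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        if pbest_f.min() < gbest_f:
            gbest_f = pbest_f.min()
            gbest = pbest[pbest_f.argmin()].copy()
            fails = 0
        else:
            fails += 1
            if fails >= 30:              # early stopping rule
                break
    return gbest, gbest_f
```

With the sketch from slide 12, training would look like `pso_train(lambda th: sse_objective(th, series, p), dim=2 * p)`.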

  15. Pre-processing for SMNM-ANN: either seasonal differencing, differencing and normalization, or de-seasonalization, de-trending and normalization.
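A sketch of the first pipeline (seasonal differencing, then first differencing, then min-max scaling); scaling to [0, 1] is an assumption made to match the sigmoid output:

```python
import numpy as np

def preprocess(series, season=12):
    """Seasonal differencing -> first differencing -> min-max scaling."""
    x = np.asarray(series, dtype=float)
    x = x[season:] - x[:-season]      # seasonal differencing
    x = np.diff(x)                    # first differencing
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo), (lo, hi)

def rescale(scaled, bounds):
    """Invert the min-max scaling before the differences are cumulated back."""
    lo, hi = bounds
    return scaled * (hi - lo) + lo
```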

  16. Model Adequacy and Input Significance for SMNM-ANN

$$x_t = f\left(x_{t-1},\dots,x_{t-p};\; w_1,\dots,w_p,\, b_1,\dots,b_p\right) + \varepsilon_t$$

$$R_{t-i} = \operatorname{rank}\left(x_{t-i}\right), \qquad \hat{\varepsilon}_t = \beta_0 + \sum_{i=1}^{p}\beta_i R_{t-i} + u_t$$

Model adequacy: jointly test $\beta_1 = \dots = \beta_p = 0$. Input significance: test each $\beta_i = 0$ separately.

  17. Input Significance Tests (percentages by model, input and significance level)

| Model | Adequacy test | x(t-1) | x(t-2) | x(t-3) | x(t-4) | Significance level |
|---|---|---|---|---|---|---|
| Model 1 | 100.00% | 100.00% | 57.40% | 99.00% | 98.90% | 0.01 |
| Model 1 | 100.00% | 100.00% | 76.80% | 95.50% | 95.70% | 0.05 |
| Model 1 | 100.00% | 100.00% | 85.60% | 91.20% | 90.00% | 0.10 |
| Model 2 | 100.00% | 100.00% | 98.40% | 99.30% | 91.30% | 0.01 |
| Model 2 | 100.00% | 100.00% | 99.60% | 95.40% | 95.60% | 0.05 |
| Model 2 | 100.00% | 100.00% | 99.70% | 88.40% | 99.20% | 0.10 |
| Model 3 | 99.90% | 49.80% | 100.00% | 98.80% | 99.00% | 0.01 |
| Model 3 | 100.00% | 72.30% | 100.00% | 95.20% | 95.30% | 0.05 |
| Model 3 | 100.00% | 82.80% | 100.00% | 89.70% | 90.50% | 0.10 |
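The exact testing procedure is given in Mohammadi (2018). As an illustration only, the sketch below implements the rank-regression reading of the slide-16 formulas, regressing the network residuals on the ranks of each input and testing the coefficients; it may differ from the published test in its details:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import rankdata

def input_significance(residuals, lagged_inputs, alpha=0.05):
    """Regress residuals on the ranks of each input and test the coefficients."""
    ranks = np.column_stack([rankdata(col) for col in lagged_inputs.T])
    fit = sm.OLS(residuals, sm.add_constant(ranks)).fit()
    adequacy_p = fit.f_pvalue               # joint test of all rank coefficients
    significant = fit.pvalues[1:] < alpha   # per-input tests, skipping the constant
    return adequacy_p, significant
```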

  18. PS-ANN for Forecasting

$$h_{j,t} = f_1\left(\sum_{i=1}^{p} w_{ij}\, x_{t-i} + \theta_j\right),\; j = 1,2,\dots,k, \qquad \hat{x}_t = f_2\left(\prod_{j=1}^{k} h_{j,t}\right), \qquad f_1(a) = f_2(a) = \frac{1}{1+\exp(-a)}$$
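A minimal sketch of the pi-sigma forward pass as reconstructed above (the names are mine):

```python
import numpy as np

def ps_ann_forecast(x_lags, W, theta):
    """Pi-sigma forward pass: k summing (sigma) units feed one product (pi) unit."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    h = sigmoid(W @ x_lags + theta)   # k summing units; W has shape (k, p)
    return sigmoid(np.prod(h))        # product unit through the output activation
```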

  19. DNM-ANN for Forecasting

$$Y_{ij,t} = \frac{1}{1+\exp\left(-k\left(w_{ij}\, x_{t-i} + b_{ij}\right)\right)}, \quad i = 1,2,\dots,p;\; j = 1,2,\dots,q$$

$$Z_{j,t} = \prod_{i=1}^{p} Y_{ij,t}, \qquad V_t = \sum_{j=1}^{q} Z_{j,t}, \qquad \hat{x}_t = \frac{1}{1+\exp\left(-k\left(V_t - \theta\right)\right)}$$
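A minimal sketch of the dendritic neuron forward pass; the slope k and the soma threshold defaults are illustrative assumptions:

```python
import numpy as np

def dnm_forecast(x_lags, W, B, k=1.0, theta=0.5):
    """Dendritic neuron: synaptic -> dendritic (product) -> membrane (sum) -> soma."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    Y = sigmoid(k * (W * x_lags + B))   # synaptic layer; W, B have shape (q, p)
    Z = Y.prod(axis=1)                  # dendritic layer: product over the p inputs
    V = Z.sum()                         # membrane: sum over the q branches
    return sigmoid(k * (V - theta))     # soma output
```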

  20. Automatic Forecasting Methods Based on SMNM-ANN, PS-ANN, DNM-ANN. AFM-(SMNM;PS;DNM)-ANN-1: pre-processing (differencing operations) and fixed architecture. AFM-(SMNM;PS;DNM)-ANN-2: pre-processing (differencing operations) and architecture selection based on a validation data set. AFM-(SMNM;PS;DNM)-ANN-3: pre-processing (differencing operations) and architecture selection based on input significance tests. AFM-(SMNM;PS;DNM)-ANN-4: pre-processing (detrending and deseasonalization operations) and architecture selection based on input significance tests.

  21. AFM-SMNM-ANN-1: Start → Seasonal Differencing → Differencing → Train SMNM-ANN with a predetermined architecture by PSO → Calculate Forecasts → End. Differencing is applied when the autocorrelation at the tested lag k is significant:

$$\left|r_k\right| > 1.645 \sqrt{\frac{1+2\sum_{i=1}^{k-1} r_i^2}{n}}$$
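A sketch of that differencing decision, reading the band above as a one-sided 5% Bartlett bound:

```python
import numpy as np
from statsmodels.tsa.stattools import acf

def needs_differencing(series, lag):
    """Compare |r_lag| with the 1.645 Bartlett band from the slide."""
    r = acf(series, nlags=lag, fft=False)
    se = np.sqrt((1.0 + 2.0 * np.sum(r[1:lag] ** 2)) / len(series))
    return abs(r[lag]) > 1.645 * se
```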

  22. AFM-SMNM-ANN-2: Start → Seasonal Differencing → Differencing → Determine a set of candidate architectures → Train SMNM-ANN for the candidate architectures by PSO → Calculate validation-data performance for all candidates → Select the best architecture → Calculate Forecasts → End.
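A sketch of the validation-based selection in AFM-2; `train_fn` and `forecast_fn` are hypothetical placeholders for the PSO training and forecasting routines:

```python
import numpy as np

def select_architecture(series, candidate_lags, val_size, train_fn, forecast_fn):
    """Train every candidate input count; keep the best validation RMSE."""
    train, val = series[:-val_size], series[-val_size:]
    best = (None, np.inf, None)
    for p in candidate_lags:                          # e.g. range(1, 6)
        params = train_fn(train, p)                   # PSO training of the network
        preds = forecast_fn(train, params, p, horizon=val_size)
        rmse = float(np.sqrt(np.mean((val - preds) ** 2)))
        if rmse < best[1]:
            best = (p, rmse, params)
    return best[0], best[2]
```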

  23. AFM-SMNM-ANN-3: Start → Seasonal Differencing → Differencing → Determine lagged variables (inputs) using significant partial autocorrelations → Train SMNM-ANN by PSO → Apply input significance tests → Are all inputs significant? If yes, calculate forecasts and end; if no, exclude the insignificant inputs and retrain.
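A sketch of the lag selection in AFM-3; the slides do not state the significance band, so the usual 1.96/√n large-sample band is an assumption here:

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

def significant_lags(series, max_lag=20):
    """Pick input lags whose partial autocorrelation leaves the 95% band."""
    r = pacf(series, nlags=max_lag)
    bound = 1.96 / np.sqrt(len(series))   # large-sample band for the PACF
    return [k for k in range(1, max_lag + 1) if abs(r[k]) > bound]
```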

  24. AFM-SMNM-ANN-4: Start → Seasonal Adjustment (Winters multiplicative ES method) → De-trend (linear regression) → Determine lagged variables (inputs) using significant partial autocorrelations → Train SMNM-ANN by PSO → Apply input significance tests → Are all inputs significant? If yes, calculate forecasts and end; if no, exclude the insignificant inputs and retrain.
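A sketch of the AFM-4 adjustments; the deck uses the Winters multiplicative ES method for seasonal adjustment, for which a classical multiplicative decomposition stands in here:

```python
import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose

def adjust(series, period):
    """Multiplicative seasonal adjustment, then linear detrending by regression."""
    dec = seasonal_decompose(series, model="multiplicative", period=period)
    deseason = np.asarray(series, dtype=float) / dec.seasonal  # remove seasonal factors
    t = np.arange(len(deseason))
    slope, intercept = np.polyfit(t, deseason, 1)              # fit a linear trend
    return deseason - (slope * t + intercept), dec.seasonal, (slope, intercept)
```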

  25. Exploring Forecasting Performances Computational Intelligence in Forecasting Competition 2016 M4 Competition (Weekly, Hourly, Daily Time Series)

  26. CIF 2016 results (SMAPE):

| Rank | Method | SMAPE |
|---|---|---|
| 1 | BaggedETS | 5.98 |
| 2 | Ensemble of LSTMs | 6.60 |
| 3 | ETS | 6.67 |
| 4 | FRBE | 6.77 |
| 5 | MLP | 6.92 |
| 6 | LSTM deseasonalized | 7.02 |
| 7 | ARIMA | 7.03 |
| 8 | HEM | 7.32 |
| 9 | AFM-DNM-ANN-3 | 7.48 |
| 10 | AFM-DNM-ANN-1 | 7.65 |
| 11 | AFM-SMNM-ANN-1 | 7.66 |
| 12 | AFM-SMNM-ANN-3 | 7.75 |
| 13 | PB-GRNN | 7.86 |
| 14 | PB-RF | 7.86 |
| 15 | AFM-SMNM-ANN-2 | 7.96 |
| 16 | AVG | 8.02 |
| 17 | PB-MLP | 8.05 |
| 18 | LSTM | 8.20 |
| 19 | AFM-PS-ANN-1 | 8.48 |
| 20 | AFM-PS-ANN-3 | 8.70 |
| 21 | FCDNN | 8.71 |
| 22 | AFM-DNM-ANN-4 | 8.89 |
| 23 | Random Walk | 9.14 |
| 24 | MTSFA | 9.69 |
| 25 | AFM-PS-ANN-4 | 9.70 |
| 26 | Fuzzy c-regression m | 10.04 |
| 27 | TSFIS | 10.18 |
| 28 | Theta | 11.01 |
| 29 | AFM-SMNM-ANN-4 | 11.85 |
| 30 | HFM | 11.89 |
| 31 | MSAKAF | 14.24 |
| 32 | CORN | 19.86 |

  27. SMAPE by frequency for the proposed methods and benchmarks:

| Method | Weekly | Daily | Hourly | Mean |
|---|---|---|---|---|
| AFM-SMNM-ANN-3 | 9.538 | 3.246 | 12.979 | 8.587 |
| AFM-SMNM-ANN-1 | 9.668 | 3.204 | 17.200 | 10.024 |
| AFM-DNM-ANN-1 | 10.450 | 3.467 | 17.817 | 10.578 |
| AFM-DNM-ANN-3 | 11.031 | 3.457 | 17.422 | 10.637 |
| AFM-SMNM-ANN-4 | 14.858 | 7.864 | 12.741 | 11.821 |
| AFM-PS-ANN-3 | 9.267 | 3.201 | 23.038 | 11.835 |
| AFM-PS-ANN-1 | 9.228 | 3.212 | 25.770 | 12.736 |
| AFM-PS-ANN-4 | 15.151 | 8.210 | 16.784 | 13.382 |
| AFM-DNM-ANN-4 | 23.445 | 13.738 | 13.333 | 16.839 |
| RNN | 15.220 | 5.964 | 14.698 | 11.961 |
| MLP | 21.349 | 9.321 | 13.842 | 14.837 |

  28. M4 Competition Hourly Data Results: a table ranking 25 methods by hourly SMAPE (from 9.33 to 42.56), comparing the AFM variants and the MLP/RNN benchmarks with selected M4 entries (Smyl, S.; Montero-Manso, P. et al.; Pełka, P.; Bandara, K. et al.; Dudek, G.; Bontempi, G.; Trotta, B.; and others). The best AFM variant, AFM-SMNM-ANN-4 (12.74), places 6th among the 25.

  29. Hourly SMAPE versus the statistical benchmarks (Rank = 12 among the 61 methods in M4):

| Rank | Method | Hourly |
|---|---|---|
| 1 | AFM-SMNM-ANN-4 | 12.74 |
| 2 | AFM-SMNM-ANN-3 | 12.98 |
| 3 | AFM-DNM-ANN-4 | 13.33 |
| 4 | MLP | 13.84 |
| 5 | sNaive | 13.91 |
| 6 | RNN | 14.70 |
| 7 | AFM-PS-ANN-4 | 16.78 |
| 8 | AFM-SMNM-ANN-1 | 17.20 |
| 9 | AFM-DNM-ANN-3 | 17.42 |
| 10 | AFM-DNM-ANN-1 | 17.82 |
| 11 | SES | 18.09 |
| 12 | Theta | 18.14 |
| 13 | Naive2 | 18.38 |
| 14 | Damped | 19.27 |
| 15 | Com | 22.05 |
| 16 | AFM-PS-ANN-3 | 23.04 |
| 17 | AFM-PS-ANN-1 | 25.77 |
| 18 | Holt | 29.25 |
| 19 | Naive | 43.00 |

  30. Comparison with the top M4 entries (hourly):

| Rank | Method / Team | Hourly |
|---|---|---|
| 1 | Pawlikowski, M. et al. | 2.9033 |
| 2 | Montero-Manso, P. et al. | 2.9155 |
| 3 | Doornik, J., Hendry, D. & Castle, J. | 3.0219 |
| 4 | Smyl, S. | 3.2052 |
| 5 | Fiorucci, J. A. & Louzada, F. | 3.2426 |
| 6 | Shaub, D. | 3.2952 |
| 7 | Jaganathan, S. & Prakash, P. | 3.3699 |
| 8 | Naive2 | 3.3900 |
| 9 | Pedregal, D. J. et al. | 3.4489 |
| 10 | Legaki, N. Z. & Koutsouri, K. | 3.4545 |
| 11 | Petropoulos, F. & Svetunkov, I. | 3.5194 |
| 12 | ETS | 5.7342 |
| 13 | SMNM-3 | 5.7429 |
| 14 | SMNM-4 | 5.9202 |
| 15 | DNM-1 | 6.0871 |
| 16 | PS-1 | 6.0871 |
| 17 | DNM-3 | 6.4998 |
| 18 | DNM-4 | 6.9238 |
| 19 | SMNM-1 | 7.6056 |
| 20 | PS-4 | 8.7249 |
| 21 | Comb | 8.7996 |
| 22 | PS-3 | 11.0353 |

  31. M4 Competition Weekly Data Results: a table ranking 25 methods by weekly SMAPE (from 6.814 to 73.334), comparing the AFM variants and the MLP/RNN benchmarks with selected M4 entries (Kyriakides, I. & Artusi, A.; Viole, F. & Vinod, H.; Smyl, S.; Montero-Manso, P. et al.; and others). The best AFM variant, AFM-PS-ANN-1 (9.228), places 9th among the 25.

  32. Weekly SMAPE versus the statistical benchmarks (Rank = 42 among the 61 methods in M4):

| Rank | Method | Weekly |
|---|---|---|
| 1 | Damped | 8.866 |
| 2 | Com | 8.944 |
| 3 | SES | 9.012 |
| 4 | Theta | 9.093 |
| 5 | Naive2 | 9.161 |
| 6 | Naive | 9.161 |
| 7 | sNaive | 9.161 |
| 8 | AFM-PS-ANN-1 | 9.228 |
| 9 | AFM-PS-ANN-3 | 9.267 |
| 10 | AFM-SMNM-ANN-3 | 9.538 |
| 11 | AFM-SMNM-ANN-1 | 9.6682 |
| 12 | Holt | 9.708 |
| 13 | AFM-DNM-ANN-1 | 10.4499 |
| 14 | AFM-DNM-ANN-3 | 11.031 |
| 15 | AFM-SMNM-ANN-4 | 14.858 |
| 16 | AFM-PS-ANN-4 | 15.151 |
| 17 | RNN | 15.220 |
| 18 | MLP | 21.349 |
| 19 | AFM-DNM-ANN-4 | 23.445 |

  33. Comparison with the top M4 entries (weekly):

| Rank | Method / Team | Weekly |
|---|---|---|
| 1 | Montero-Manso, P. et al. | 4.3476 |
| 2 | Pawlikowski, M. et al. | 4.4771 |
| 3 | Petropoulos, F. & Svetunkov, I. | 4.5767 |
| 4 | Doornik, J., Hendry, D. & Castle, J. | 4.5839 |
| 5 | Smyl, S. | 4.6484 |
| 6 | Legaki, N. Z. & Koutsouri, K. | 5.0300 |
| 7 | ETS | 5.0596 |
| 8 | Comb | 5.0984 |
| 9 | Fiorucci, J. A. & Louzada, F. | 5.1295 |
| 10 | SMNM-1 | 5.1437 |
| 11 | Pedregal, D. J. et al. | 5.1482 |
| 12 | Naive2 | 5.1789 |
| 13 | Jaganathan, S. & Prakash, P. | 5.1792 |
| 14 | SMNM-3 | 5.2941 |
| 15 | PS-1 | 5.3643 |
| 16 | PS-3 | 5.5052 |
| 17 | DNM-1 | 5.5495 |
| 18 | DNM-3 | 5.7683 |
| 19 | Shaub, D. | 6.1939 |
| 20 | SMNM-4 | 10.6375 |
| 21 | PS-4 | 10.7701 |
| 22 | DNM-4 | 15.9681 |

  34. M4 Competition Daily Data Results: a table ranking 25 methods by daily SMAPE (from 2.852 to 53.075), comparing the AFM variants and the MLP/RNN benchmarks with selected M4 entries (Bontempi, G.; Viole, F. & Vinod, H.; Pełka, P.; Smyl, S.; Montero-Manso, P. et al.; and others). The best AFM variant, AFM-PS-ANN-3 (3.201), places 8th among the 25.

  35. Daily SMAPE versus the statistical benchmarks (Rank = 32 among the 61 methods in M4):

| Rank | Method | Daily |
|---|---|---|
| 1 | Com | 2.980 |
| 2 | SES | 3.045 |
| 3 | Naive2 | 3.045 |
| 4 | Naive | 3.045 |
| 5 | sNaive | 3.045 |
| 6 | Theta | 3.053 |
| 7 | Damped | 3.064 |
| 8 | Holt | 3.066 |
| 9 | AFM-PS-ANN-3 | 3.201 |
| 10 | AFM-SMNM-ANN-1 | 3.204 |
| 11 | AFM-PS-ANN-1 | 3.212 |
| 12 | AFM-SMNM-ANN-3 | 3.246 |
| 13 | AFM-DNM-ANN-3 | 3.457 |
| 14 | AFM-DNM-ANN-1 | 3.4666 |
| 15 | RNN | 5.964 |
| 16 | AFM-SMNM-ANN-4 | 7.864 |
| 17 | AFM-PS-ANN-4 | 8.210 |
| 18 | MLP | 9.321 |
| 19 | AFM-DNM-ANN-4 | 13.738 |

  36. Comparison with the top M4 entries (daily):

| Rank | Method / Team | Daily |
|---|---|---|
| 1 | Pawlikowski, M. et al. | 1.4287 |
| 2 | Pedregal, D. J. et al. | 1.9666 |
| 3 | Comb | 1.9718 |
| 4 | Fiorucci, J. A. & Louzada, F. | 1.9792 |
| 5 | Legaki, N. Z. & Koutsouri, K. | 1.9888 |
| 6 | Naive2 | 1.9889 |
| 7 | ETS | 1.9912 |
| 8 | Doornik, J., Hendry, D. & Castle, J. | 1.9969 |
| 9 | Petropoulos, F. & Svetunkov, I. | 1.9977 |
| 10 | Jaganathan, S. & Prakash, P. | 2.0051 |
| 11 | PS-3 | 2.0143 |
| 12 | PS-1 | 2.0194 |
| 13 | DNM-1 | 2.0232 |
| 14 | SMNM-3 | 2.0330 |
| 15 | SMNM-1 | 2.0336 |
| 16 | Montero-Manso, P. et al. | 2.0415 |
| 17 | DNM-3 | 2.0518 |
| 18 | Smyl, S. | 2.0991 |
| 19 | Shaub, D. | 2.1773 |
| 20 | SMNM-4 | 4.1683 |
| 21 | PS-4 | 4.4361 |
| 22 | DNM-4 | 6.4365 |

  37. Comparison with the best method (AFM-SMNM-ANN-3 vs. Smyl's winning method, share of series): Daily 69.08% vs. 30.92%; Hourly 62.79% vs. 37.21%; Weekly 48.75% vs. 51.25%.

  38. Conclusions. Strategy 1 and Strategy 3 are the best strategies. Strategy 3 brings a statistical perspective to ANN models. Strategy 2 was tried on CIF 2016 and was not better than Strategies 1 and 3. Strategy 4 is generally the worst strategy. SMNM, PS and DNM ANNs perform better than the MLP and RNN benchmarks. The performance of the strategies varies with the ANN type and the data type. Pre-processing methods are important for the performance of ANNs. SMNM can produce competitive results against the winning method.

  39. Future Research for SMNM, PS and DNM. Outliers and change points were not taken into consideration in the strategies; handling them could increase performance. Ensemble strategies proved useful in M4 and can be considered. Different pre-processing methods can be considered. Global models can be obtained by using SMNM, PS and DNM. Hybridization of statistical methods with SMNM, PS and DNM can be useful.

  40. References. Yadav, R.N., Kalra, P.K., & John, J. (2007). Time series prediction with single multiplicative neuron model. Applied Soft Computing, 7, 1157–1163. Todo, Y., Tamura, H., Yamashita, K., & Tang, Z. (2014). Unsupervised learnable neuron model with nonlinear interaction on dendrites. Neural Networks, 60, 96–103. Shin, Y., & Ghosh, J. (1991). The pi-sigma network: An efficient higher-order neural network for pattern classification and function approximation. In Proceedings of the International Joint Conference on Neural Networks. Mohammadi, S. (2018). A new test for the significance of neural network inputs. Neurocomputing, 273, 304–322. Makridakis, S., Spiliotis, E., & Assimakopoulos, V. (2020). The M4 Competition: 100,000 time series and 61 forecasting methods. International Journal of Forecasting, 36, 54–74.
