Federated learning (FL) has been proposed to enable distributed learning on artificial intelligence Internet of Things (AIoT) devices with guarantees of high-level data privacy. Since random initial models in FL can easily result in unregulated stochastic gradient descent (SGD) processes, existing FL methods suffer from both slow convergence and poor accuracy, especially in non-IID scenarios. To address this problem, we propose a novel method named CyclicFL, which quickly derives effective initial models to guide the SGD processes and thus improves overall FL training performance. We formally analyze the significance of data consistency between the pre-training and training stages of CyclicFL, showing that the loss of models pre-trained by CyclicFL has limited Lipschitzness. Moreover, we systematically prove that our method achieves faster convergence under various convexity assumptions. Unlike traditional centralized pre-training methods that require public proxy data, CyclicFL pre-trains initial models on selected AIoT devices cyclically without exposing their local data, so it can be easily integrated into security-critical FL methods. Comprehensive experimental results show that CyclicFL not only improves maximum classification accuracy by up to 14.11%, but also significantly accelerates the overall FL training process.
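The core idea of the abstract, passing an initial model through selected devices in sequence so that pre-training happens on local data without that data ever leaving a device, can be sketched as follows. This is a minimal illustration under strong simplifying assumptions (a linear model, three simulated non-IID devices, plain full-batch gradient steps); the device setup, learning rates, and helper names are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-IID setup: each "device" holds local data for a
# 1-D regression task with a slightly different true slope.
def make_device(slope_shift):
    X = rng.normal(size=(32, 1))
    y = (2.0 + slope_shift) * X[:, 0] + rng.normal(scale=0.1, size=32)
    return X, y

devices = [make_device(s) for s in (-0.5, 0.0, 0.5)]

def sgd_steps(w, X, y, lr=0.05, steps=20):
    # Local training on one device; only the model w is passed around.
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Cyclic pre-training: the model visits the selected devices one after
# another, so no raw data is exposed outside its device.
w = np.zeros(1)
for X, y in devices:          # one pre-training cycle
    w = sgd_steps(w, X, y)

# w now serves as the initial model for ordinary FL rounds (e.g. FedAvg),
# replacing a random initialization.
print(float(w[0]))
```

After one cycle the model sits near the devices' shared structure rather than at a random starting point, which is the property the abstract credits for faster, better-regulated SGD in the subsequent FL training.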
This study emphasizes the importance of meeting key assumptions in multiple regression analysis, using GDP data from the construction sector as a case study. Reliable results depend on assumptions such as normality of errors, constant variance, and independence of errors. Our research shows that an initial model failing to meet these assumptions produced an inflated R² of 97%, misleadingly suggesting that it explained most of the variance in GDP. After the model was adjusted to satisfy the assumptions, the R² dropped to 26%. This sharp decline illustrates how assumption violations distort the coefficient of determination, underscoring the need for thorough assumption validation for accurate economic analysis and informed policymaking.
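The diagnostics the abstract relies on can be computed directly from the residuals of an ordinary least-squares fit. The sketch below uses simulated data (the construction-sector GDP series is not reproduced here) and illustrates three standard checks: R², a Durbin-Watson statistic for error independence, and a simple heteroscedasticity check; thresholds and variable names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in for the GDP data: intercept plus one predictor.
n = 120
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(scale=0.3, size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
resid = y - fitted

# Coefficient of determination: share of variance explained.
r2 = 1 - resid.var() / y.var()

# Durbin-Watson statistic for independence of errors
# (values near 2 indicate no autocorrelation; near 0 or 4, a violation).
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Crude constant-variance check: under homoscedasticity, |residuals|
# should be roughly uncorrelated with the fitted values.
het = np.corrcoef(np.abs(resid), fitted)[0, 1]

print(round(r2, 3), round(dw, 2), round(het, 2))
```

When such checks fail, as in the study's initial model, the reported R² is not a trustworthy measure of explained variance, which is exactly the drop from 97% to 26% the abstract describes.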
Surrogate models are commonly used in place of computationally expensive simulations in engineering design and optimization, and their predictive performance is usually influenced by the quality of the design of experiments (DoE). One way to eliminate the effect of DoE randomness is to average prediction accuracy over multiple DoEs. However, how many DoEs are needed to obtain stable prediction results for problems of different dimensionalities remains a challenging issue. Mathematical test functions have been employed in a large body of literature to assess the predictive performance of surrogate models. In this work, 30 test functions ranging from 1 to 16 dimensions are selected to investigate the relationship between the number of DoEs needed for stable prediction accuracy and the number of sample points. A convergence condition is used to determine whether a reliable estimate of model accuracy has been obtained. The resulting number of DoEs required for estimating model accuracy is offered as a guideline for those who develop surrogate models and select test functions to validate their performance.
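The procedure the abstract describes, repeating the DoE, refitting the surrogate, and averaging the accuracy metric until the running mean stabilizes, can be sketched as below. The test function, surrogate (a degree-2 polynomial), sample size, and stopping rule are all assumed for illustration and are not the paper's actual choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative 1-D test function and a deliberately simple surrogate.
f = lambda x: np.sin(2 * x) + 0.5 * x
x_test = np.linspace(0, 3, 200)

def accuracy_for_one_doe():
    # One random DoE: 8 training points drawn uniformly over the domain.
    x_tr = rng.uniform(0, 3, size=8)
    coeffs = np.polyfit(x_tr, f(x_tr), deg=2)
    rmse = np.sqrt(np.mean((np.polyval(coeffs, x_test) - f(x_test)) ** 2))
    return rmse

# Average the accuracy metric over more and more DoEs until the running
# mean changes by less than `tol` for `window` consecutive additions
# (a simple convergence condition standing in for the paper's).
tol, window = 0.01, 5
history, mean_prev, stable = [], None, 0
for k in range(1, 500):
    history.append(accuracy_for_one_doe())
    mean_now = np.mean(history)
    if mean_prev is not None and abs(mean_now - mean_prev) / mean_now < tol:
        stable += 1
        if stable >= window:
            break
    else:
        stable = 0
    mean_prev = mean_now

print(k, round(mean_now, 3))
```

The value of `k` at which the loop stops is the quantity the paper tabulates across its 30 test functions: the number of DoEs one should average over before trusting a reported surrogate accuracy.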