  Bestsellers

  • Article (No Access)

    Computational Nanocharacterization for Combinatorially Developed Bulk Metallic Glass

    Bulk metallic glasses synthesized at specialized facilities at Yale using magnetron cosputtering are sent to Southern Connecticut State University for elemental characterization. Characterization is done using a Zeiss Sigma VP SEM coupled with an Oxford EDS and is automated using control software provided by Oxford. The collected data are processed and visualized using computational methods developed internally, and the processed data are then organized into a database suitable for web retrieval. This technique allows a single combinatorial wafer containing ~600 unique compounds to be characterized in ~11 hours.
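
    As a toy illustration of the final pipeline stage only, the sketch below loads per-site EDS compositions into a SQLite table that a web service could query. The schema, wafer ID, Zr-Cu-Al values and coordinates are invented for the example and are not the authors' actual database design or data.

      # Minimal sketch: organize per-site EDS compositions into a database
      # suitable for web retrieval. Table layout and values are assumptions.
      import sqlite3

      rows = [  # (wafer_id, x_mm, y_mm, element, atomic_pct) per EDS site
          ("W001", 1.0, 2.0, "Zr", 55.2),
          ("W001", 1.0, 2.0, "Cu", 30.1),
          ("W001", 1.0, 2.0, "Al", 14.7),
      ]

      conn = sqlite3.connect(":memory:")  # file-backed in production
      conn.execute("CREATE TABLE sites (wafer_id TEXT, x_mm REAL,"
                   " y_mm REAL, element TEXT, atomic_pct REAL)")
      conn.executemany("INSERT INTO sites VALUES (?, ?, ?, ?, ?)", rows)

      # Example web-style retrieval: composition at one measurement site.
      for elem, pct in conn.execute(
              "SELECT element, atomic_pct FROM sites"
              " WHERE wafer_id = ? AND x_mm = ? AND y_mm = ?",
              ("W001", 1.0, 2.0)):
          print(f"{elem}: {pct} at.%")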

  • Article (No Access)

    Renormalized multicanonical sampling

    For a homogeneous system divisible into identical, weakly interacting subsystems, the multicanonical procedure can be accelerated if it is first applied to determine the density of states for a single subsystem. This result is then employed to approximate the state density of a subsystem of twice the size, which forms the starting point of a new multicanonical iteration. Since this compound subsystem interacts less on average with its environment, iterating this sequence of steps rapidly generates the state density of the full system.
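
    The doubling step can be sketched in a few lines: for weakly interacting identical subsystems, the density of states of the doubled subsystem is approximated by a convolution of the single-subsystem result, which then seeds the next multicanonical iteration. The energy grid and toy input below are illustrative assumptions, not the paper's implementation.

      import numpy as np

      def double_log_dos(log_g):
          """Log-DOS of the doubled subsystem via the convolution
          Omega_2N(E) ~ sum_e Omega_N(e) * Omega_N(E - e)."""
          n = len(log_g)
          out = np.full(2 * n - 1, -np.inf)
          for i in range(n):
              for j in range(n):
                  out[i + j] = np.logaddexp(out[i + j], log_g[i] + log_g[j])
          return out

      # Toy input: log-DOS of one small subsystem on a uniform energy grid,
      # e.g. as produced by a converged multicanonical run (values assumed).
      log_g1 = np.log(np.array([1.0, 4.0, 6.0, 4.0, 1.0]))
      log_g2 = double_log_dos(log_g1)   # starting estimate for the 2N subsystem
      print(np.exp(log_g2))             # exact binomials here: 1, 8, 28, 56, 70, ...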

  • Article (No Access)

    Accelerated rare event sampling

    A sampling procedure for the transition matrix Monte Carlo method is introduced that generates the density of states function over a wide parameter range with minimal coding effort.
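
    For orientation, here is a generic transition-matrix Monte Carlo sketch on a 1D periodic Ising chain: every proposed single-spin flip is recorded in a collection matrix C(E -> E'), and the density of states is then recovered from the detailed-balance relation g(E) T(E -> E') = g(E') T(E' -> E) for the estimated infinite-temperature transition matrix. The model, walk dynamics and run length are illustrative assumptions, not the paper's specific sampling procedure.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 32                                   # spins in a periodic chain
      beta = 0.5                               # stand-in sampling temperature
      spins = rng.choice([-1, 1], size=N)
      E = -int(np.sum(spins * np.roll(spins, 1)))

      levels = np.arange(-N, N + 1, 4)         # allowed energies of the chain
      index = {int(Ev): i for i, Ev in enumerate(levels)}
      C = np.zeros((len(levels), len(levels))) # collection matrix of proposals

      for _ in range(200000):
          i = rng.integers(N)
          dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
          C[index[E], index[E + dE]] += 1.0    # record every proposal, accepted or not
          if dE <= 0 or rng.random() < np.exp(-beta * dE):
              spins[i] *= -1
              E += dE

      # Density of states from detailed balance of the estimated matrix.
      T = C / np.maximum(C.sum(axis=1, keepdims=True), 1.0)
      ln_g = np.zeros(len(levels))
      for k in range(1, len(levels)):
          if T[k - 1, k] > 0 and T[k, k - 1] > 0:
              ln_g[k] = ln_g[k - 1] + np.log(T[k - 1, k] / T[k, k - 1])
          else:
              ln_g[k] = ln_g[k - 1]            # level never visited: no estimate
      print(dict(zip(levels.tolist(), np.round(ln_g, 2).tolist())))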

  • Article (No Access)

    Asynchronous Algorithms for Computing Equilibrium Prices in a Capital Asset Pricing Model

    In this paper, we extend the work of [Tong, J, J Hu and J Hu (2017). Computing equilibrium prices for a capital asset pricing model with heterogeneous beliefs and margin-requirement constraints. European Journal of Operational Research, 256(1), 24–34] and develop various asynchronous algorithms to calculate the equilibrium asset prices in a heterogeneous capital asset pricing model. These algorithms are based on different asynchronous updating schemes such as delayed updating, cyclic updating, fixed-length updating and random updating. In addition to the potential benefit of improved computational efficiency, these asynchronous updating schemes also reflect several scenarios in financial markets in which investors may receive asset pricing information with various degrees of delay and may differ in how and when they prefer to rebalance their portfolios. Proofs of convergence for these algorithms are given, and numerical experiments comparing them show that the asynchronous algorithms work quite well.
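
    The flavor of such schemes can be conveyed with a generic asynchronous fixed-point iteration: one price coordinate is updated at a time, cyclically or at random, while the other coordinates remain stale. The linear contraction map below is a stand-in for illustration only and does not reproduce the paper's CAPM equilibrium map.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 5
      A = rng.uniform(-0.1, 0.1, (n, n))         # small entries -> contraction
      b = rng.uniform(1.0, 2.0, n)
      F = lambda p: A @ p + b                    # stand-in price update map
      p_star = np.linalg.solve(np.eye(n) - A, b) # exact fixed point for reference

      def solve_async(scheme, sweeps=200):
          p = np.zeros(n)
          for t in range(sweeps * n):
              i = t % n if scheme == "cyclic" else int(rng.integers(n))
              p[i] = F(p)[i]                     # update one price; others stay stale
          return p

      for scheme in ("cyclic", "random"):
          err = np.linalg.norm(solve_async(scheme) - p_star)
          print(f"{scheme:6s} updating: ||p - p*|| = {err:.2e}")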

  • Article (No Access)

    BIFURCATIONS IN NUMERICAL METHODS FOR VOLTERRA INTEGRO-DIFFERENTIAL EQUATIONS

    We are interested in finding approximate solutions to parameter-dependent Volterra integro-differential equations over long time intervals using numerical schemes. This paper concentrates on changes in qualitative behavior (bifurcations) in the solutions and extends the work of Brunner and Lambert and of Matthys (who considered only changes in stability behavior) to other bifurcations. We begin by considering a one-parameter equation with a separable fading-memory convolution kernel: we give an analytical discussion of bifurcations in this case and provide details of the behavior of numerical schemes. We extend our analysis to an equation with a two-parameter fading-memory convolution kernel and show the relationship to the classical test equation studied by the earlier authors. We draw attention to the fact that known stability results may not provide a reliable framework for the choice of numerical scheme when other changes in qualitative behavior are also of interest. We give bifurcation plots for a variety of methods and show how, for known values of the parameters, stepsizes h > 0 may be chosen to preserve the correct qualitative behavior in the numerical solution of the Volterra integro-differential equation.
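
    To make the setting concrete: a VIDE with a separable fading-memory kernel, y'(t) = f(y(t)) + lam * int_0^t exp(-(t-s)) y(s) ds, reduces to an ODE system via the memory variable z(t) = int_0^t exp(-(t-s)) y(s) ds, which satisfies z' = y - z, and can then be stepped numerically. The nonlinearity, parameter values and stepsize below are illustrative assumptions; scanning the parameter (or the stepsize) and inspecting the long-time behavior is the kind of experiment the paper's bifurcation plots formalize.

      import numpy as np

      def euler_vide(lam, h=0.05, t_end=200.0, y0=0.5):
          f = lambda y: y - y**3                 # illustrative nonlinearity (assumed)
          y, z = y0, 0.0                         # z(t) carries the fading memory
          ys = []
          for _ in range(int(t_end / h)):
              # simultaneous explicit Euler update of y' = f(y) + lam*z, z' = y - z
              y, z = y + h * (f(y) + lam * z), z + h * (y - z)
              ys.append(y)
          return np.array(ys)

      for lam in (-2.0, -0.5, 0.5):
          tail = euler_vide(lam)[-200:]          # long-time behavior
          print(f"lam={lam:+.1f}: min={tail.min():+.3f}  max={tail.max():+.3f}")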

  • Article (No Access)

    Asymptotic behaviour of solutions to the stationary Navier–Stokes equations in two-dimensional exterior domains with zero velocity at infinity

    We investigate analytically and numerically the existence of stationary solutions converging to zero at infinity for the incompressible Navier–Stokes equations in a two-dimensional exterior domain. Physically, this corresponds, for example, to fixing a propeller by an external force at some point in a two-dimensional fluid filling the plane and asking whether the flow becomes steady with the velocity at infinity equal to zero. To answer this question, we find the asymptotic behaviour of such steady solutions in the case where the net force on the propeller is nonzero. In contrast to the three-dimensional case, where the asymptotic behaviour of the solution to this problem is given by a scale-invariant solution, the asymptote in the two-dimensional case is not scale invariant and has a wake. We provide an asymptotic expansion for the velocity field at infinity, which shows that, within a wake of width |x|^{2/3}, the velocity decays like |x|^{-1/3}, whereas outside the wake it decays like |x|^{-2/3}. We check numerically that this behaviour is accurate at least up to second order and demonstrate how to use this information to significantly improve the numerical simulations. Finally, in order to check the compatibility of the present results with rigorous results for the case of zero net force, we consider a family of boundary conditions on the body which interpolate between the nonzero and zero net force cases.
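
    In LaTeX notation, the quoted decay rates can be restated schematically (x_\perp, the coordinate transverse to the wake axis, is a symbol introduced here for readability and is not taken from the abstract):

      |u(x)| \sim |x|^{-1/3} \quad \text{for } |x_\perp| \lesssim |x|^{2/3},
      \qquad |u(x)| \sim |x|^{-2/3} \quad \text{otherwise}.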

  • Article (Open Access)

    EDITORIAL SPECIAL ISSUE: PART IV-III-II-I SERIES: FRACTALS-FRACTIONAL AI-BASED ANALYSES AND APPLICATIONS TO COMPLEX SYSTEMS

    Fractals, 01 Jan 2023

    Complex systems, as interwoven miscellaneous interacting entities that emerge and evolve through self-organization in a myriad of spiraling contexts, exhibit subtleties on a global scale while steering the way we understand complexity, which has itself undergone an evolutionary process of an unfolding cumulative nature wherein order is viewed as the unifying framework. Owing to the striking non-separability of its components, a complex system cannot be understood in terms of the properties of its individual, isolated constituents; it is rather comprehended through a multilevel approach to systems behavior, in which emergent behavior and patterns transcend the characteristics of the ubiquitous units composing the system itself. This observation signals a change of scientific paradigm, showing that a reductionist perspective does not by any means imply a constructionist view; in that vein, complex systems science, associated with multiscale problems, is regarded as the ascendancy of emergence over reductionism and of mechanistic insight evolving into complex systems. With evolvability related to species and humans owing their existence to their ancestors' capacity to adapt, emerge and evolve, and given the relation between the complexity of models, designs, visualization and optimality, complexity entails a horizon that can take these subtleties into account and make its own means of solution applicable. Such views attach importance to a future science of complexity, which may best be regarded as a minimal history congruent with observable variations, namely the most parallelizable or symmetric process that can turn random inputs into regular outputs. Interestingly enough, chaos and nonlinear systems enter this picture as cousins of complexity, whose myriad components interact heavily with one another in a nonlinear fashion among the other related systems and fields. A relation, in mathematics, is a way of connecting two or more things, that is, numbers, sets or other mathematical objects, and it is relations that describe how things are interrelated and so facilitate making sense of complex mathematical systems. Accordingly, mathematical modeling and scientific computing are proven principal tools for solving problems arising in the exploration of complex systems, with sound, stimulating and innovative aspects attributed to data science as a tailor-made discipline for making sense of voluminous (big) data. Regarding the computation of the complexity of any mathematical model, conducting analyses over the run time depends on the sort of data determined and employed along with the methods. This enables examination of the data applied in a study, which in turn depends on the capacity of the computer at work. Moreover, the varying capacities of computers have an impact on the results; nevertheless, the step-by-step application of the method in the code must be taken into consideration. In this sense, a definition of complexity evaluated over different data lends a broader applicability range, with more realism and convenience, since the process rests on concrete mathematical foundations. All of this indicates that the methods need to be investigated on the basis of their mathematical foundations together with the data they employ.
    In that way, the level of complexity that will emerge for any dataset one wishes to employ becomes foreseeable. With regard to fractals, fractal theory and analysis are geared toward assessing the fractal characteristics of data, with several methods available for assigning fractal dimensions to datasets; within that perspective, fractal analysis expands our knowledge of the functions and structures of complex systems while serving as a potential means to evaluate novel areas of research and to capture the roughness of objects, their nonlinearity, randomness, and so on. The idea of fractional-order integration and differentiation, together with the inverse relationship between them, lends fractional calculus applications in fields spanning science, medicine and engineering, among others. The approach of fractional calculus, within mathematics-informed frameworks employed to enable reliable comprehension of complex processes encompassing an array of temporal and spatial scales, notably provides novel, applicable models through fractional-order calculus for optimization methods. Computational science and modeling, for their part, are oriented toward the simulation and investigation of complex systems through the use of computers, drawing on domains ranging from mathematics and physics to computer science. A computational model consisting of numerous variables that characterize the system under consideration allows many simulated experiments to be performed by computerized means. Furthermore, Artificial Intelligence (AI) techniques, whether or not combined with fractal and fractional analysis or mathematical models, have enabled various applications, including the prediction of mechanisms ranging extensively from living organisms to other interactions across remarkable spectra, besides providing solutions to real-world complex problems on both local and global scales. While enabling the maximization of model accuracy, AI can also minimize quantities such as computational burden. Relatedly, the notion of level of complexity, often employed in computer science for decision-making and problem-solving processes, aims to evaluate the difficulty of algorithms and thereby helps to determine the resources and time required for task completion. Computational (algorithmic) complexity, referring to the measure of the computing resources (memory and storage) that a specific algorithm consumes when it is run, essentially signifies the complexity of an algorithm, yielding an approximate sense of the volume of computing resources involved and probing the input data over different values and sizes. Computational complexity, with its search algorithms and solution landscapes, eventually points toward reductions vis-a-vis universality for exploring problems with differing degrees and ranges of predictability. Taken together, this line of sophisticated, computer-assisted proof can fulfill the requirements of accuracy, interpretability, predictability and reliance on mathematical sciences, with AI and machine learning at the plinth of, and at the intersection with, different domains, among many other related points, in line with concurrent technical analyses, computing processes, computational foundations and mathematical modeling.
    Consequently, and as distinct from other collections, our special issue series provides a novel direction for stimulating, refreshing and innovative interdisciplinary, multidisciplinary and transdisciplinary understanding and research in model-based and data-driven modes, so as to obtain feasible and accurate solutions, designed simulations and optimization processes, among many more. Hence, we address theoretical reflections on how all these processes are modeled, merging together advanced methods, mathematical analyses, computational technologies and quantum means while elaborating and exhibiting the implications of applicable approaches in real-world systems and other related domains.
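
    As a concrete instance of the fractal-analysis methods mentioned above, the following minimal box-counting sketch assigns a fractal dimension to a point set. The Cantor-like test set is an illustrative assumption; the fitted slope should approach log 2 / log 3 ~ 0.631.

      import numpy as np

      # Build a middle-thirds Cantor set approximation on [0, 1]
      # (left endpoints of the level-12 construction intervals).
      pts = np.array([0.0])
      for _ in range(12):
          pts = np.concatenate([pts / 3.0, pts / 3.0 + 2.0 / 3.0])

      # Count occupied boxes N(eps) at box sizes eps = 3^-k.
      sizes = [3.0 ** -k for k in range(1, 9)]
      counts = [len(np.unique(np.floor(pts / eps))) for eps in sizes]

      # Box-counting dimension: slope of log N(eps) vs log(1/eps).
      slope = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]
      print(f"estimated box-counting dimension: {slope:.3f}")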

  • Article (No Access)

    Mechanisms defining the action potential abnormalities in simulated amyotrophic lateral sclerosis

    The present study investigates action potential abnormalities obtained in simulated cases of three progressively greater degrees of uniform axonal dysfunction. The kinetics of the currents defining action potential propagation through the human motor nerve in the normal and abnormal cases are also given and discussed. These computations use our previous multi-layered model of the myelinated motor axon, without taking into account the aqueous layers within the myelin sheath. The results show that the classical "transient" Na+ current contributes mainly to action potential generation in the nodal segments, as the contribution of the nodal fast and slow potassium currents to the total nodal ionic current is negligible. However, the ionic channels beneath the myelin sheath are insensitive to the short-lasting current stimuli and do not contribute to action potential generation in the internodal compartments along the fibre length. The slight changes obtained in the currents underlying the generated action potentials in the three amyotrophic lateral sclerosis cases are consistent with the effect of uniform axonal dysfunction along the fibre length. Even though the uniform axonal dysfunction progressively increases in the nodal and internodal segments of each successive simulated amyotrophic lateral sclerosis case, the action potentials cannot be regarded as definitive indicators of the progressive degrees of this disease.
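
    For readers unfamiliar with the current formalism, the sketch below evaluates a generic Hodgkin-Huxley-style transient Na+ current, I_Na = g_Na m^3 h (V - E_Na), under a voltage-clamp step. The rate functions and constants are the classic squid-axon ones and only illustrate the kind of kinetics discussed; they are not the authors' human motor-axon model.

      import numpy as np

      g_Na, E_Na = 120.0, 50.0                   # mS/cm^2, mV (squid-axon values)

      def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
      def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
      def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
      def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

      # Voltage-clamp step from rest (-65 mV) to -20 mV: m activates, h inactivates.
      V, dt = -20.0, 0.01                        # mV, ms
      m = alpha_m(-65.0) / (alpha_m(-65.0) + beta_m(-65.0))   # resting gate values
      h = alpha_h(-65.0) / (alpha_h(-65.0) + beta_h(-65.0))
      for step in range(501):                    # 5 ms of clamp
          if step % 100 == 0:
              I_Na = g_Na * m**3 * h * (V - E_Na)
              print(f"t={step * dt:3.0f} ms  I_Na={I_Na:9.2f} uA/cm^2")
          m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
          h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)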

  • Article (No Access)

    Approaches for the identification of driver mutations in cancer: A tutorial from a computational perspective

    Cancer is a complex disease caused by the accumulation of genetic alterations during the individual's life. Such alterations are called genetic mutations and can be divided into two groups: (1) passenger mutations, which are not responsible for cancer, and (2) driver mutations, which are significant for cancer and responsible for its initiation and progression. Cancer cells undergo a large number of mutations, of which most are passengers and few are drivers. The identification of driver mutations is a key point and one of the biggest challenges in cancer genomics, and many computational methods for this purpose have been developed in cancer bioinformatics. Such computational methods are complex and are usually described at a high level of abstraction. This tutorial details some classical computational methods from a computational perspective, transcribing them into an algorithmic format for easy access by researchers.
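
    As a deliberately simplified example of the frequency-based reasoning behind some of these methods, the sketch below flags a gene as a candidate driver when its mutation count across a cohort is improbable under a uniform background rate. The gene names, counts and background probability are made up, and real methods additionally model mutational context, gene length and expression.

      # Toy recurrence test: is a gene mutated more often than background chance?
      from math import comb

      def binom_sf(k, n, p):
          """P(X >= k) for X ~ Binomial(n, p)."""
          return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

      n_patients = 500
      background = 0.01          # assumed per-gene background mutation probability
      counts = {"GENE_A": 3, "GENE_B": 21, "GENE_C": 6}   # made-up cohort counts

      for gene, k in counts.items():
          pval = binom_sf(k, n_patients, background)
          label = "candidate driver" if pval < 1e-3 else "likely passenger"
          print(f"{gene}: {k}/{n_patients} mutated, p={pval:.2e} -> {label}")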

  • Article (No Access)

    Prokaryote autoimmunity in the context of self-targeting by CRISPR-Cas systems

    Prokaryote adaptive immunity (CRISPR-Cas systems) can be a threat to its carriers. We analyze the risks of autoimmune reactions related to adaptive immunity in prokaryotes by computational methods. We found important differences between bacteria and archaea with respect to autoimmunity potential. According to the results of our analysis, CRISPR-Cas systems in bacteria are more prone to self-targeting even though they possess fewer spacers per organism on average than archaea. The results of our study provide opportunities to use self-targeting in prokaryotes for biological and medical applications.
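
    A toy version of a self-targeting scan can be written as an exact substring search for spacers elsewhere in the host genome, skipping the CRISPR array itself. The sequences and array coordinates below are invented, and real analyses also account for PAMs, mismatches and strand.

      genome = "ATGCGTACGTTAGCATTCCGGATACGTTAGCAATGGCC"   # made-up host genome
      array_start, array_end = 5, 15          # assumed CRISPR array location
      spacers = ["ACGTTAGCA", "GGGTTTAAA"]    # made-up spacer sequences

      def self_target_positions(spacer):
          """Exact matches of a spacer outside the CRISPR array itself."""
          hits, pos = [], genome.find(spacer)
          while pos != -1:
              if not (array_start <= pos < array_end):
                  hits.append(pos)
              pos = genome.find(spacer, pos + 1)
          return hits

      for sp in spacers:
          hits = self_target_positions(sp)
          print(sp, "->", f"self-target at {hits}" if hits else "no self-target")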

  • Article (No Access)

    COMPUTATIONAL METHODS AND MODELS NEEDED FOR SUSTAINABILITY RESEARCH BASED ON A CARBON-DIOXIDE BALANCE THEORY

    This short article first introduces a Carbon-dioxide Balance Theory for the sustainability of creatures living in the Earth's atmosphere. A brief discussion is then presented on the importance of such a theory in policy making. An appeal is issued to all IJCM readers and authors to make more effort in creating computational methods and models that deal with sustainability issues.

  • Article (No Access)

    Computational Methods for Simulating Some Typical Problems in Computational Geosciences

    The main purpose of this paper is to present computational methods for simulating some typical problems in the emerging field of computational geoscience. Due to remarkable differences between engineering systems and Earth systems, existing computational methods, which are designed for solving engineering problems, cannot be used directly to solve geoscience problems without modification. However, the fundamental philosophy of developing computational methods is applicable to the computational simulation of both geoscience and engineering problems. Because of their inherent approximation, computational methods must be verified before being put into application. After briefly introducing several computational methods and algorithms developed for simulating some typical problems in this emerging field, a typical geoscience problem, known as the chemical dissolution-front instability problem in ore-forming systems of supercritical Zhao numbers, is selected to demonstrate how computational methods can be used to solve geoscience problems.

  • Article (No Access)

    Another Existence and Uniqueness Proof for the Higman–Sims Simple Group

    In this article, we give a short proof for the existence and uniqueness of the Higman–Sims sporadic simple group 𝖧𝖲 by means of the first author's algorithm [17] and uniqueness criterion [18], respectively. We realize 𝖧𝖲 as a subgroup of GL_22(11), and determine its automorphism group Aut(𝖧𝖲). We also give a presentation for Aut(𝖧𝖲) in terms of generators and relations. Furthermore, the character table of 𝖧𝖲 is determined and representatives of its conjugacy classes are given as short words in its generating matrices inside GL_22(11).

  • Article (No Access)

    INTEGRATED ALGORITHMS FOR IMAGE ANALYSIS AND CLASSIFICATION OF NUCLEAR DIVISION FOR HIGH-CONTENT CELL-CYCLE SCREENING

    Advances in fluorescent probing and microscopic imaging technology provide important tools for biomedical research in studying the structures and functions of cells and molecules. Such studies require the processing and analysis of huge amounts of image data; manual image analysis is very time consuming, thus costly, and also potentially inaccurate and poorly reproducible. In this paper, we present and combine several advanced computational, probabilistic, and fuzzy-set methods for the computerized classification of cell nuclei in different mitotic phases. We tested the proposed methods on real image sequences recorded every fifteen minutes over a period of twenty-four hours with time-lapse fluorescence microscopy. The experimental results show that the proposed methods are effective for the classification task.
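
    The classification step alone can be illustrated with a nearest-centroid assignment softened into fuzzy-style memberships. The two features, phase centroids and membership exponent below are invented stand-ins for the richer probabilistic/fuzzy pipeline described in the paper.

      import numpy as np

      # (area_px, mean_intensity) centroids per phase, e.g. from labeled training data
      centroids = {
          "interphase": np.array([400.0, 0.30]),
          "prophase":   np.array([350.0, 0.60]),
          "metaphase":  np.array([220.0, 0.90]),
      }

      def memberships(feature, m=2.0):
          """Fuzzy c-means style memberships: u_ph proportional to d_ph^(-2/(m-1))."""
          d = {ph: float(np.linalg.norm(feature - c)) + 1e-9
               for ph, c in centroids.items()}
          w = {ph: dist ** (-2.0 / (m - 1.0)) for ph, dist in d.items()}
          total = sum(w.values())
          return {ph: val / total for ph, val in w.items()}

      nucleus = np.array([240.0, 0.85])          # features of one segmented nucleus
      u = memberships(nucleus)
      print(max(u, key=u.get), {ph: round(v, 3) for ph, v in u.items()})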

  • Chapter (No Access)

    TECHNIQUES IN INFRARED MICROSPECTROSCOPY AND ADVANCED COMPUTATIONAL METHODS FOR COLON CANCER DIAGNOSIS

    Early diagnosis of cancer continues to be a major challenge in the field of cancer prevention and management, as it can decrease the severity of the disease and improve patients' lives. Rapid progress over the past few decades in molecular- and immunological-based methods has helped to alleviate the problem to a certain extent. However, a rapid, reagent-free method to identify cancers in situ or ex vivo, making screening objective and swift, remains unachieved. At this point, methods based on IR spectroscopy are expected to provide the breakthrough and make diagnosis and grading of cancers/malignancies simple and inexpensive. In the present chapter, we deal with the utilization of FTIR microspectroscopy for the diagnosis of colon cancer as a preliminary step toward its implications for future oncology. We highlight a few technological aspects and also the enormous potential of IR spectroscopy in colon cancer specifically and in other cancers in general.
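
    One standard chemometric step in such work is to project spectra onto their leading principal components before inspection or classification. The sketch below does this with synthetic "spectra" and is a generic illustration of the approach, not necessarily the chapter's exact procedure.

      import numpy as np

      rng = np.random.default_rng(0)
      wavenumbers = np.linspace(900, 1800, 256)          # cm^-1 grid (assumed)

      def synth(shift):   # toy spectrum: two Gaussian bands plus noise
          return (np.exp(-((wavenumbers - 1080 - shift) / 30.0) ** 2)
                  + 0.8 * np.exp(-((wavenumbers - 1550) / 40.0) ** 2)
                  + 0.02 * rng.standard_normal(wavenumbers.size))

      # Two synthetic groups (e.g. "normal" vs "malignant") with a band shift.
      X = np.array([synth(0.0) for _ in range(20)] + [synth(25.0) for _ in range(20)])

      Xc = X - X.mean(axis=0)                            # mean-center spectra
      U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
      scores = U[:, :2] * S[:2]                          # PC1/PC2 scores per spectrum
      print("group means on PC1:", scores[:20, 0].mean(), scores[20:, 0].mean())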

  • Chapter (No Access)

    SULFONYLUREAS AND GLINIDES AS NEW PPARγ AGONISTS: VIRTUAL SCREENING AND BIOLOGICAL ASSAYS

    This work combines the predictive power of computational drug discovery with experimental validation by means of biological assays. In this way, a new mode of action for type 2 diabetes drugs has been unveiled. Most drugs currently employed in the treatment of type 2 diabetes either target the sulfonylurea receptor to stimulate insulin release (sulfonylureas, glinides) or target PPARγ to improve insulin resistance (thiazolidinediones). Our work shows that sulfonylureas and glinides bind to PPARγ and exhibit PPARγ agonistic activity. This result was predicted in silico by virtual screening and confirmed in vitro by three biological assays. This dual mode of action of sulfonylureas and glinides may open new perspectives for the molecular pharmacology of antidiabetic drugs, since it provides evidence that drugs can be designed to target both the sulfonylurea receptor and PPARγ. Targeting both receptors could in principle increase pancreatic insulin secretion as well as improve insulin resistance.

  • Chapter (No Access)

    Wavelet Study of Dynamical Systems Using Partial Differential Equations

    The recent development of partial differential equation theory shows that this approach is a powerful tool for the wavelet analysis of non-stationary time series, including those generated by chaotic systems. The present paper reviews the main directions in which it is applicable, formulates current open problems, and outlines possible ways to their solution.
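
    For context, a plain continuous wavelet transform of a chirp with a Morlet wavelet, computed by direct summation, illustrates the kind of non-stationary time-series analysis at issue. This shows the transform itself, not the PDE-based machinery the paper reviews, and all signal parameters are invented.

      import numpy as np

      t = np.linspace(0.0, 10.0, 1000)
      x = np.sin(2 * np.pi * (1.0 + 0.4 * t) * t)        # chirp: rising frequency

      def morlet_cwt(x, t, scales, w0=6.0):
          dt = t[1] - t[0]
          out = np.empty((len(scales), len(x)), dtype=complex)
          for i, a in enumerate(scales):
              u = (t - t[len(t) // 2]) / a               # wavelet centered mid-signal
              psi = np.exp(1j * w0 * u) * np.exp(-0.5 * u**2) / np.sqrt(a)
              # cross-correlation with the wavelet via convolution
              out[i] = np.convolve(x, np.conj(psi[::-1]), mode="same") * dt
          return out

      scales = np.geomspace(0.05, 1.0, 40)
      W = np.abs(morlet_cwt(x, t, scales))
      ridge = scales[np.argmax(W, axis=0)]               # dominant scale vs. time
      print("dominant scale near t=1:", ridge[100], " near t=9:", ridge[900])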