In today’s fast-paced digital era, advertising design depends heavily on advanced visual communication techniques to capture and maintain consumer attention. By leveraging dynamic visuals, brands can effectively engage diverse audiences and create long-lasting connections with consumers. Traditional advertising design often lacks effectiveness because it relies on subjective judgments. The challenge lies in combining aesthetics with audience engagement using algorithmic techniques for improved visual communication. The objective of this study is to explore the application of algorithm-driven visual communication strategies in advertising design, focusing on enhancing the effectiveness of visual content by aligning it with audience preferences and behaviors. The collected data undergo preprocessing using a normalization technique. Feature extraction is performed using convolutional neural networks (CNNs) to analyze advertising background images, allowing the selection of suitable visuals that resonate with the target audience. This study proposes an intelligent moth flame-optimized malleable-gated recurrent units (IMF-MGRU) method that synthesizes textual information to generate effective product taglines, enhancing the expressiveness of the advertising image. Malleable-gated recurrent units (MGRU) are used to create relevant taglines that align with the selected visuals, while the intelligent moth flame (IMF) algorithm optimizes the layout of elements within the image, minimizing overlap among key components. Experimental findings show that the proposed method enhances the appeal of advertising content, with considerable performance advantages in the evaluation experiments. The IMF-MGRU method thus represents a significant advance in synthesizing visual and textual elements, resulting in cohesive and compelling advertising designs.
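As a concrete illustration of the layout-optimization step, the sketch below applies a generic moth-flame optimizer to minimize pairwise overlap among rectangular ad elements. The element sizes, canvas bounds, and objective are hypothetical stand-ins; the paper's IMF variant and its coupling to the MGRU tagline generator are not reproduced here.

```python
# A minimal sketch of moth-flame optimization (MFO) for ad-layout placement,
# assuming the objective is the total pairwise overlap of rectangular
# elements (e.g. logo, tagline, product shot). All names, sizes, and
# bounds are illustrative, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
SIZES = [(0.30, 0.10), (0.40, 0.15), (0.25, 0.25)]  # (w, h) of 3 elements
DIM = 2 * len(SIZES)                                # an (x, y) per element

def overlap_cost(pos):
    """Sum of pairwise overlap areas of axis-aligned boxes (to minimize)."""
    boxes = [(x, y, w, h) for (x, y), (w, h)
             in zip(pos.reshape(-1, 2), SIZES)]
    cost = 0.0
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            xi, yi, wi, hi = boxes[i]
            xj, yj, wj, hj = boxes[j]
            ox = max(0.0, min(xi + wi, xj + wj) - max(xi, xj))
            oy = max(0.0, min(yi + hi, yj + hj) - max(yi, yj))
            cost += ox * oy
    return cost

def mfo(n_moths=30, n_iter=200, b=1.0):
    moths = rng.uniform(0.0, 0.7, (n_moths, DIM))    # candidate layouts
    best, best_cost = None, float("inf")
    for it in range(n_iter):
        fit = np.array([overlap_cost(m) for m in moths])
        order = np.argsort(fit)
        flames = moths[order].copy()                 # best layouts so far
        if fit[order[0]] < best_cost:
            best, best_cost = flames[0].copy(), fit[order[0]]
        # The flame count shrinks over iterations (standard MFO schedule).
        n_flames = max(1, round(n_moths - it * (n_moths - 1) / n_iter))
        a = -1.0 - it / n_iter                       # spiral range: -1 -> -2
        for i in range(n_moths):
            f = flames[min(i, n_flames - 1)]
            d = np.abs(f - moths[i])                 # distance moth -> flame
            t = rng.uniform(a, 1.0, DIM)
            # Logarithmic spiral flight of the moth around its flame.
            moths[i] = np.clip(d * np.exp(b * t) * np.cos(2 * np.pi * t) + f,
                               0.0, 0.7)
    return best, best_cost

best, cost = mfo()
print("best layout positions:", best.round(3), "overlap:", round(cost, 4))
```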
The beam power of the CEPC Collider is about 60 MW, so the efficiency of the RF power source is very important for the cost of project implementation. The most popular source for an accelerator is a klystron, which has the advantage that it can be operated at high power with reasonably high efficiency. IHEP is developing a 650 MHz klystron with 800 kW CW output power and 80% efficiency. To reach this goal, a couple of klystron prototypes will be manufactured in the near future. The first prototype has been completely manufactured by the Institute of Electronics (IE) and the GLVAC Company, and the first step of high-power conditioning and commissioning has been completed at IHEP. Design schemes for the high-efficiency klystron are also in progress.
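The cost sensitivity follows from the quoted numbers alone. The back-of-the-envelope sketch below ignores transmission losses and operating margins, so its figures are illustrative rather than CEPC specifications, but it shows why each point of klystron efficiency matters at this scale.

```python
# Rough klystron count and grid-power arithmetic from the figures above;
# transmission losses and margins are neglected (illustrative only).
beam_power_mw = 60.0      # RF power the collider beams must receive
klystron_out_kw = 800.0   # CW output per 650 MHz klystron
efficiency = 0.80         # IHEP design goal

n_klystrons = beam_power_mw * 1000 / klystron_out_kw
print(f"klystrons needed : {n_klystrons:.0f}")                  # 75 tubes
print(f"AC power at 80%  : {beam_power_mw / efficiency:.1f} MW")  # 75 MW
print(f"AC power at 65%  : {beam_power_mw / 0.65:.1f} MW")  # ~92 MW if a
                                            # lower-efficiency tube were used
```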
Control over scale, dynamic, environmental, and geometric errors in a 5-axis machine tool is required to realize a high-precision machine tool. Geometric errors in particular, such as translational, rotational, offset, and squareness errors, are important factors that should be considered at the design stage of the machine tool. In this paper, geometric errors are evaluated for different configurations of 5-axis machine tool, namely, 1) table tilting, 2) head tilting, and 3) universal, and their error synthesis models are derived. The proposed model differs from the conventional error synthesis model in that it considers offsets and offset errors. The volumetric error is estimated for every configuration with random geometric errors. Finally, the best configuration and the critical design parameters and errors are suggested.
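A common way to build such error synthesis models is to compose homogeneous transformation matrices (HTMs) along the kinematic chain. The sketch below illustrates this general idea with small-angle error terms and invented error values; it is not the paper's actual table-tilting, head-tilting, or universal model.

```python
# A minimal sketch of HTM-based error synthesis: the volumetric error is
# the difference between the real chain (with geometric errors) and the
# ideal chain, applied to the tool point. Error magnitudes are invented.
import numpy as np

def htm(dx=0, dy=0, dz=0, ex=0, ey=0, ez=0):
    """4x4 homogeneous transform with translations (dx, dy, dz) and
    small-angle rotational errors (ex, ey, ez)."""
    return np.array([[1.0, -ez,  ey, dx],
                     [ez,  1.0, -ex, dy],
                     [-ey, ex,  1.0, dz],
                     [0.0, 0.0, 0.0, 1.0]])

ideal = htm(dx=100) @ htm(dy=50) @ htm(dz=20)
real  = (htm(dx=100, ez=2e-5)      # X axis with a squareness error
         @ htm(dy=50, dx=3e-3)     # Y axis with a translational error
         @ htm(dz=20, ex=1e-5))    # Z axis with a rotational error

tool = np.array([0.0, 0.0, 0.0, 1.0])   # tool point in its local frame
volumetric_error = (real - ideal) @ tool
print("volumetric error (dx, dy, dz):", volumetric_error[:3])
```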
Following the recent rapid development of research utilizing magnetorheological (MR) fluid, a smart material whose apparent viscosity can be changed instantaneously under magnetic control, many applications have been established to exploit its benefits and advantages. One of the most important device applications of MR fluid is the MR valve, which uses the flow (valve) mode, the most popular of the fluid's available working modes. As such, the MR valve is widely applied in hydraulic actuation and vibration-reduction devices, among them dampers, actuators, and shock absorbers. This paper presents a review of the MR valve, discussing several design configurations and the mathematical modeling of the valve. The review classifies MR valves by coil configuration and geometrical arrangement of the valve, and focuses on four mathematical models for calculating yield stress and pressure drop in the MR valve: Bingham plastic, Herschel–Bulkley, bi-viscous, and Herschel–Bulkley with pre-yield viscosity (HBPV). Design challenges and opportunities for the application of MR fluid and MR valves are also highlighted. This review is intended to provide basic knowledge on the design and modeling of MR valves, complementing other reviews on MR fluid, its applications, and technologies.
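Of the four models, the Bingham plastic model is the simplest: the pressure drop splits into a viscous term and a field-dependent yield term, dp = 12*eta*Q*L/(w*h^3) + c*tau_y*L/h. The sketch below uses this standard form with illustrative valve dimensions and fluid properties, not values from any particular valve in the review.

```python
# A minimal sketch of the Bingham plastic pressure-drop model for an
# annular MR valve gap. All numbers below are illustrative placeholders.
import math

def bingham_pressure_drop(Q, L, w, h, eta, tau_y, c=2.5):
    """Viscous term plus field-dependent yield term; c is an empirical
    flow coefficient, commonly taken between about 2 and 3."""
    dp_viscous = 12 * eta * Q * L / (w * h**3)
    dp_yield = c * tau_y * L / h
    return dp_viscous + dp_yield

Q = 1e-5            # volumetric flow rate, m^3/s
L, h = 0.02, 0.001  # annular gap length and thickness, m
w = math.pi * 0.02  # mean circumference of the annulus, m
eta = 0.1           # plastic viscosity, Pa.s
for tau_y in (0.0, 10e3, 30e3):   # yield stress rises with applied field
    dp = bingham_pressure_drop(Q, L, w, h, eta, tau_y)
    print(f"tau_y = {tau_y/1e3:5.1f} kPa -> dp = {dp/1e3:8.1f} kPa")
```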
Based on the wind turbine's complex wake vortex system and combined with the machine's aerodynamic performance, a hybrid method for designing the wind turbine rotor is presented; it discards the assumption of constant circulation along the rotor blade axis and embraces the airfoil's lift-drag characteristics. The validity of the method is demonstrated by designing and computing a wind rotor.
Even though AI technology is a relatively new discipline, many of its concepts have already found practical applications. Expert systems, in particular, have made significant contributions to technologies in such fields as business, medicine, engineering design, chemistry, and particle physics.
This paper describes an expert system developed to aid mechanical engineering designers in the preliminary design of variable-stroke internal-combustion engines. Variable-stroke engines are more economical in fuel consumption but their design is particularly difficult to accomplish. With the traditional design approach, synthesizing the mechanisms for the design is rather difficult and evaluating the mechanisms is an even more cumbersome and time-consuming effort. Our expert system assists the designer by generating and evaluating a large number of design alternatives represented in the form of graphs. Through the application of structural and design rules obtained from design experts to the graphs, good quality preliminary design configurations of the engines are promptly deduced. This approach can also be used in designing other types of mechanisms.
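The generate-and-test pattern underlying such a system can be sketched generically: enumerate candidate mechanism graphs, prune them with structural rules, and rank the survivors with design rules. The toy link names, rules, and scoring below are hypothetical illustrations, not the expert system's actual knowledge base.

```python
# A minimal generate-and-test sketch over candidate mechanism graphs,
# with invented structural and design rules (illustrative only).
from itertools import combinations

NODES = ["piston", "crank", "rocker", "slider", "control_link"]

def candidate_graphs():
    """Enumerate 4-edge link sets as stand-ins for mechanism graphs."""
    for edges in combinations(list(combinations(NODES, 2)), 4):
        yield set(edges)

def structural_ok(g):
    """Structural rule: piston and crank must both appear in the graph."""
    used = {n for e in g for n in e}
    return "piston" in used and "crank" in used

def design_score(g):
    """Design rules: reward a control_link joint (enables variable
    stroke) and broader use of the available links."""
    used = {n for e in g for n in e}
    return 10 * ("control_link" in used) + len(used)

best = max((g for g in candidate_graphs() if structural_ok(g)),
           key=design_score)
print(sorted(best))
```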
One goal of Artificial Intelligence is to develop and understand computational mechanisms for solving difficult real-world problems. Unfortunately, domains traditionally used in general problem-solving research lack important characteristics of real-world domains, making it difficult to apply the techniques developed. Most classic AI domains require satisfying a set of Boolean constraints. Real-world problems require finding a solution that meets a set of Boolean constraints and performs well on a set of real-valued constraints. In addition, most classic domains are static while domains from the real world change. In this paper we demonstrate that SteppingStone, a general learning problem solver, is capable of solving problems with these characteristics. SteppingStone heuristically decomposes a problem into simpler subproblems, and then learns to deal with the interactions that arise between the subproblems. In lieu of an agreed upon metric for problem difficulty, we choose significant problems that are difficult for both people and programs as good candidates for evaluating progress. Consequently we adopt the domain of logic synthesis from VLSI design to demonstrate SteppingStone’s capabilities.
The paper emphasizes methods, architectures, and components for system-on-chip design. It describes the basic knowledge and skills for designing high-performance, low-power embedded devices whose complexity increases exponentially, as does the effort of designing them. By relying upon an appropriate design methodology that concentrates on reuse, executable specifications, and early error detection, these complexities can be mastered. The paper bundles these topics in order to provide a good understanding of all the problems involved. It shows how to go from description and verification to implementation and testing, presenting three systems-on-chip for three different wireless applications based on configurable processors and custom hardware accelerators.
The complexity of algorithms implemented in digital systems continues to grow, and methods are being developed for the most effective use of both hardware resources and energy. For engineers, the optimization of hardware resources in the design of control units remains an important issue. The standard way of implementing the control unit as a finite-state machine (FSM) is not satisfactory, as it consumes considerable field-programmable gate array (FPGA) resources. This paper is devoted to the design of a Moore FSM in an FPGA structure using look-up tables and embedded memory block (EMB) elements. The background of the problem is discussed. A method for designing Moore FSM logic circuits with EMBs, based on splitting the set of logical conditions and encoding the logical conditions, is presented. Design examples and research results are given.
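The division of labor behind such a design can be sketched in a few lines: the transition function becomes a memory table addressed by the current state plus a (split, encoded) subset of logical conditions, while the Moore outputs depend on the state alone. The three-state machine below is a hypothetical example, not one of the paper's benchmarks.

```python
# A minimal Moore FSM as two lookup tables, mirroring the LUT-plus-EMB
# split: TRANS plays the role of the embedded memory block addressed by
# (state, condition bits); OUT depends only on the state (Moore property).
# The machine itself is an invented 3-state example.
TRANS = {
    (0, 0): 0, (0, 1): 1,
    (1, 0): 2, (1, 1): 1,
    (2, 0): 0, (2, 1): 2,
}
OUT = {0: 0b00, 1: 0b01, 2: 0b11}   # outputs are a function of state only

def run(inputs, state=0):
    for x in inputs:
        print(f"state={state} out={OUT[state]:02b} in={x}")
        state = TRANS[(state, x)]   # one "memory read" per clock cycle
    return state

run([1, 1, 0, 1, 0])
```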
This paper describes the Multiagent Systems Engineering (MaSE) methodology. MaSE is a general-purpose methodology for developing heterogeneous multiagent systems. It uses a number of graphically based models to describe system goals, behaviors, agent types, and agent communication interfaces, and it also provides a way to specify an architecture-independent detailed definition of the internal agent design. An example of applying the MaSE methodology is also presented.
Knowledge bases contain specific and general knowledge. In relational database systems, specific knowledge is often represented as a set of relations. The conventional methodologies for centralized database design can be applied to develop a normalized, redundancy-free global schema. Distributed database design involves redundancy removal as well as the distribution design which allows replicated data segments. Thus, distribution design can be viewed as a process on a normalized global schema which produces a collection of fragments of relations from a global database. Clearly, not every fragment of data can be permitted as a relation. In this paper, we clarify and formally discuss three kinds of fragmentations of relational databases, and characterize their features as valid designs, and we introduce a hybrid knowledge fragmentation as the general case. For completeness of presentation, we first show an algorithm for the validity test of vertical fragmentations of normalized relations, and then extend it to the more general case of unbiased fragmentations.
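For intuition, a validity test for vertical fragmentation typically checks three conditions: completeness, lossless reconstruction via a shared key, and disjointness of non-key attributes. The sketch below implements this textbook form; the paper's algorithm and its extension to unbiased fragmentations are more general.

```python
# A minimal validity test for a vertical fragmentation of one relation,
# under the usual three conditions; schema and fragments are invented.
def is_valid_vertical_fragmentation(attrs, key, fragments):
    attrs, key = set(attrs), set(key)
    # 1. Completeness: every attribute appears in some fragment.
    if set().union(*fragments) != attrs:
        return False
    # 2. Reconstruction: each fragment carries the key, so the natural
    #    join of all fragments recovers the original relation losslessly.
    if not all(key <= set(f) for f in fragments):
        return False
    # 3. Disjointness: non-key attributes are not replicated.
    seen = set()
    for f in fragments:
        non_key = set(f) - key
        if non_key & seen:
            return False
        seen |= non_key
    return True

emp = ["eno", "name", "salary", "dept"]   # hypothetical schema, key: eno
print(is_valid_vertical_fragmentation(
    emp, ["eno"],
    [{"eno", "name"}, {"eno", "salary", "dept"}]))           # True
print(is_valid_vertical_fragmentation(
    emp, ["eno"],
    [{"eno", "name"}, {"eno", "name", "salary", "dept"}]))   # False
```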
In this paper, we present a framework for maintainable software design, based on a multilevel understanding of software function. This framework is the novel functional representation ZD, in which domain concepts (e.g., employee records) as well as computational concepts (e.g., stacks and queues) are represented in a reusable manner. We use ZD to support the construction of a library of reusable components for novel configurations. In order to achieve this, we provide mechanisms for supporting flexible configurations, address problems of determining the correctness of a combination of library components, and consider the computational complexity of finding combinations. These problems all stem from problems of interactions between components. Therefore, the structure of ZD is focused on representing and handling these interactions. We show how the approach based on ZD resolves certain well-known problems faced by other library-based component reuse architectures.
This paper presents a method for modeling communication protocols using G-Nets, an object-based Petri net formalism. Our approach focuses on the specification of one entity in one node at a time, with an analysis that allows consideration of other layers and nodes in addition to module analysis. We extend G-Nets with the notion of timers, which aids the construction of protocol software models. Our method prevents some types of potential deadlocks and livelocks from being introduced into the produced net models. We present net synthesis rules that prevent some potential design errors by including error cases in the model. Thus, our node (site) interplay modeling includes cases in which a message may arrive corrupted or be lost entirely before it reaches its destination node. Also, since our models have deadlock-preserving skeletons, the verification of global deadlock non-existence can be performed on the less complex skeleton rather than on the full G-Net model. Our analysis method discovers some deadlocks as well as other unacceptable markings that do not allow restoration of the initial state. Finding potential livelocks or overspecification is also part of the analysis.
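The benefit of a deadlock-preserving skeleton is that global deadlock non-existence can be checked by plain reachability analysis on a small place/transition net. The sketch below performs such a check on a toy send/deliver/ack net; it is a generic Petri net stand-in, not the G-Net formalism itself.

```python
# Exhaustive reachability over a tiny place/transition net, reporting any
# reachable marking that disables every transition (a global deadlock).
# Transitions map to (consumed tokens, produced tokens) over places.
NET = {
    "send":    ({"ready": 1},     {"in_flight": 1}),
    "deliver": ({"in_flight": 1}, {"received": 1}),
    "ack":     ({"received": 1},  {"ready": 1}),
}
INIT = {"ready": 1, "in_flight": 0, "received": 0}

def enabled(m, t):
    need, _ = NET[t]
    return all(m.get(p, 0) >= n for p, n in need.items())

def fire(m, t):
    need, make = NET[t]
    m = dict(m)
    for p, n in need.items(): m[p] -= n
    for p, n in make.items(): m[p] = m.get(p, 0) + n
    return m

def find_deadlocks(init):
    seen, stack, deadlocks = set(), [init], []
    while stack:
        m = stack.pop()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        succ = [fire(m, t) for t in NET if enabled(m, t)]
        if not succ:
            deadlocks.append(m)   # no transition enabled: deadlock
        stack.extend(succ)
    return deadlocks

print(find_deadlocks(INIT))   # [] -> no global deadlock in this toy net
```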
Voting Advice Applications (VAAs) are online tools that match the policy preferences of voters with the policy positions of political parties or candidates. Designed to enhance the political competence of citizens, VAAs have become increasingly popular and institutionally embedded in a growing number of European countries. While the traditional VAA relied on the stated or academically coded positions of parties/candidates, a recent innovation has been to introduce a social vote recommendation borrowing the basic principles of collaborative filtering. The latter takes advantage of the community of VAA users to provide a vote recommendation. This paper provides an overview of the social vote recommendation scheme and tackles three problems related to its optimal implementation in a real-world setting: (1) the number of samples required to train party models; (2) whether this number is affected by differences in characteristics between early and late users; and (3) whether generalizations can be derived across VAA applications in different countries. For our experiments we use three real VAA datasets based on elections in Greece 2012, Cyprus 2013, and Germany 2013. The corresponding datasets are made freely available to other researchers working in the areas of VAAs and web-based recommender systems.
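In spirit, the social vote recommendation works as follows: each party model summarizes the policy-answer profiles of users who declared an intention to vote for that party, and a new user is matched to the most similar model. The sketch below uses mean profiles and cosine similarity over a tiny invented answer matrix; the actual VAA pipelines and datasets are richer than this.

```python
# A minimal collaborative-filtering-style vote recommendation: party
# models are mean answer profiles of their declared supporters.
# Party names and answer matrices are hypothetical placeholders.
import numpy as np

# Policy answers on a 5-point scale (rows: users, cols: policy items).
declared = {
    "Party A": np.array([[5, 4, 1], [4, 5, 2], [5, 5, 1]]),
    "Party B": np.array([[1, 2, 5], [2, 1, 4], [1, 1, 5]]),
}
party_models = {p: ans.mean(axis=0) for p, ans in declared.items()}

def recommend(user):
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(party_models, key=lambda p: cos(user, party_models[p]))

print(recommend(np.array([4, 4, 2])))   # -> Party A
```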
The SHARE project seeks to apply information technologies in helping design teams gather, organize, re-access, and communicate both informal and formal design information to establish a "shared understanding" of the design and design process. This paper presents the visions of SHARE, along with the research and strategies undertaken to build an infrastructure toward its realization. A preliminary prototype environment is being used by designers working on a variety of industry sponsored design projects. This testbed continues to inform and guide the development of NoteMail, MovieMail, and Xshare, as well as other components of the next generation SHARE environment that will help distributed design teams work together more effectively on the Internet.
The application of reliability-based design optimization (RBDO) to degrading systems is challenging because of the continual interplay between calculating time-variant reliability (to ensure reliability policies are met) and moving the design point to optimize various objectives, such as cost, weight, and size. The time needed for Monte Carlo simulation (MCS) is lengthy when reliability calculations are required at each iteration of the design point. The common methods used to date to improve efficiency have some shortcomings: first, most approaches approximate probability via a method that invokes the most-likely failure point (MLFP); second, tolerances are almost always excluded from the list of design parameters (hence only so-called parameter design is performed), and without tolerances, true monetary costs cannot be determined, especially in manufactured systems. Herein, the efficiency of RBDO for degrading systems is greatly improved by essentially uncoupling the time-variant reliability problem from the optimization problem. First, a meta-model is built to relate time-variant reliability to the design space, with design-of-experiments techniques helping to select a few judicious training sets. Second, the meta-model is accessed to quickly evaluate objectives and reliability constraints in the optimization process. The set-theory approach (with MCS) is invoked to find the system reliability accurately and efficiently for multiple competing performance measures. As a case study, the seminal roller clutch with degradation due to wear is examined. The meta-model method, using both moving least-squares and kriging (via DACE in Matlab), is compared to the traditional approach whereby reliability is determined by MCS at each optimization iteration. The case study shows that both means and tolerances are found that correctly minimize a monetary cost objective while ensuring that reliability policies are met. The meta-model approach is simple, accurate, and very fast, suggesting an attractive means for RBDO of time-variant systems.
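The uncoupling idea can be sketched compactly: expensive Monte Carlo reliability runs are confined to a handful of design-of-experiments points, a cheap surrogate is fitted, and the optimizer then queries only the surrogate. The toy limit state, cost proxy, and quadratic surrogate below stand in for the paper's roller-clutch model and kriging/moving-least-squares meta-models.

```python
# A minimal meta-model RBDO sketch: MCS only at a few DOE points, then a
# cheap surrogate drives the optimization. All models here are invented.
import numpy as np

rng = np.random.default_rng(1)

def reliability_mcs(mean, n=200_000):
    """Expensive step: P(strength > load) for a toy limit state."""
    strength = rng.normal(mean, 0.1 * mean, n)   # capacity, spread grows
    load = rng.normal(1.0, 0.15, n)
    return np.mean(strength > load)

# 1. DOE: a handful of training designs spanning the design space.
train_x = np.linspace(1.0, 2.0, 6)
train_r = np.array([reliability_mcs(x) for x in train_x])

# 2. Meta-model: quadratic least-squares fit of reliability vs design.
surrogate = np.poly1d(np.polyfit(train_x, train_r, 2))

# 3. Optimization: minimize a cost proxy (cost grows with the design
#    mean) subject to surrogate reliability >= 0.985, over a dense grid;
#    no MCS calls are made inside this loop.
grid = np.linspace(1.0, 2.0, 1001)
feasible = grid[surrogate(grid) >= 0.985]
best = feasible.min()
print(f"optimal design mean ~ {best:.3f}, "
      f"checked reliability = {reliability_mcs(best):.4f}")
```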
As the technology associated with the "Web Services" trend gains significant adoption, the need for a corresponding design approach becomes increasingly important. This paper introduces a foundational model for designing (composite) services. The innovation of this model lies in the identification of four interrelated viewpoints (interface behaviour, provider behaviour, choreography, and orchestration) and their formalization from a control-flow perspective in terms of Petri nets. By formally capturing the interrelationships between these viewpoints, the proposal enables the static verification of the consistency of composite services designed in a cooperative and incremental manner. A proof-of-concept simulation and verification tool has been developed to test the possibilities of the proposed model.
Like any other large and complex software system, Service-Based Systems (SBSs) must evolve to fit new user requirements and execution contexts. The changes resulting from the evolution of SBSs may degrade their design and quality of service (QoS) and may often cause the appearance of common poor solutions in their architecture, called antipatterns, in opposition to design patterns, which are good solutions to recurring problems. Antipatterns resulting from these changes may hinder the future maintenance and evolution of SBSs. The detection of antipatterns is thus crucial to assess the design and QoS of SBSs and to facilitate their maintenance and evolution. However, methods and techniques for the detection of antipatterns in SBSs are still in their infancy despite their importance. In this paper, we introduce a novel approach, supported by a framework, for specifying and detecting antipatterns in SBSs. Using our approach, we specify 10 well-known and common antipatterns, including Multi Service and Tiny Service, and automatically generate their detection algorithms. We apply and validate the detection algorithms, in terms of precision and recall, on two systems developed independently: (1) Home-Automation, an SBS with 13 services, and (2) FraSCAti, an open-source implementation of the Service Component Architecture (SCA) standard with more than 100 services. This validation demonstrates that our approach enables the specification and detection of Service Oriented Architecture (SOA) antipatterns with an average precision of 90% and recall of 97.5%.
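For reference, precision and recall are computed in the usual way over detected versus manually validated antipattern instances, as the short sketch below shows; the service names are placeholders, not taken from Home-Automation or FraSCAti.

```python
# Scoring antipattern detection against a validated ground truth.
# Service names are hypothetical placeholders.
detected = {"PaymentGod", "TinyLogger", "ChattyBilling", "UserFacade"}
ground_truth = {"PaymentGod", "TinyLogger", "ChattyBilling", "AuditService"}

tp = len(detected & ground_truth)     # true positives
precision = tp / len(detected)        # how many findings are real
recall = tp / len(ground_truth)       # how many real ones were found
print(f"precision = {precision:.0%}, recall = {recall:.0%}")  # 75%, 75%
```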
This paper presents our work in the design and development of collaborative platforms to support distributed scientific collaborations in a national biosecurity laboratory which carries out diagnostics and research work in animal diseases. We have focused on two types of collaboration challenges. One is the “distributed” collaborations between scientists working inside the physical containment areas and scientists working in the general office area within the laboratory. The second is the collaborative diagnosis and decision-making work between this laboratory and other organizations working on the responses of emergency animal diseases. The “biosecurity collaboration platform” which addresses the first challenge has been implemented and used by the scientists in the laboratory. The platform integrates shared digital workspaces and supports the sharing and interaction of scientific data from various resources and laboratory instruments (e.g. microscopes). The “secure collaboration platform” which addresses the second challenge is an extension of the biosecurity collaboration platform and integrates eAuthentication and eAuthorization technologies to support secure communication and information sharing between experts from different organizations. Results from user studies have shown that the collaboration platforms can provide core capabilities of communication, trustworthy information sharing and access to real-time data from scientific instruments in complex collaborations in the biosecurity domain.
The article reports a survey of knowledge, attitudes, and usage of complementary and alternative medicine in Singapore, explaining the methods, results, and conclusions of the study.