
Acadlore took over the publication of IJCMEM from 2025, Vol. 13, No. 3. The preceding volumes were published under a CC BY 4.0 license by the previous owner and are displayed here as agreed between Acadlore and the previous owner.

This issue/volume is not published by Acadlore.
Volume 5, Issue 5, 2017

Abstract


Isosurfaces are an appropriate approach to visualizing scalar fields, or the absolute value of vector fields, in three dimensions. The nodes of the corresponding isosurface mesh are determined using an efficient and accurate isovalue search method. These nodes are then typically connected by triangular elements, which are obtained with the help of an adapted advancing front algorithm. An important prerequisite of an isovalue search method is that volume data of the examined field be available throughout the entire domain. That is, either the field values are precomputed at the nodes of an auxiliary post-processing volume mesh, or a novel meshfree method is developed that enables both efficient computation of field values at arbitrary points and fast determination of domains with a defined range of field values. If the first approach is applied, a classical isovalue search method uses an octree scheme to find the relevant volume elements intersected by the isosurface. The surface elements of the isosurface are then constructed from the intersection points of the isosurface with the volume elements. In that case, the accuracy and the computational costs are mainly governed by the density of the post-processing volume mesh. In contrast, an innovative coupling of established isovalue search methods, fast boundary element method (BEM) techniques, and advancing front meshing algorithms is presented here to compute isosurfaces with high accuracy using only the original BEM model. This novel meshfree method enables very accurate isovalue searches along with a nearly arbitrarily adjustable resolution of the computed isosurface. Furthermore, refinements of the isosurface are also possible, for instance depending on the current viewing position. The main idea behind this meshfree method is to directly combine an octree-based isovalue search with the octree-based fast multipole method (FMM).
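
As a hedged illustration of the octree-based isovalue search, the sketch below recursively subdivides a bounding box and keeps the leaf cells whose sampled corner values bracket the isovalue. Corner sampling is a deliberate simplification; the paper instead couples the traversal with FMM-based field bounds. All names and parameters here are hypothetical.

```python
import numpy as np

def octree_isovalue_cells(field, lo, hi, isovalue, depth, max_depth):
    """Recursively collect leaf cells whose corner values bracket the isovalue.

    `field` is any callable f(x, y, z) -> float. Corner sampling may miss
    thin features; guaranteed FMM-based bounds (as in the paper) would not.
    """
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    corners = [(x, y, z) for x in (lo[0], hi[0])
                         for y in (lo[1], hi[1])
                         for z in (lo[2], hi[2])]
    values = [field(*c) for c in corners]
    if min(values) > isovalue or max(values) < isovalue:
        return []                      # cell cannot bracket the isosurface
    if depth == max_depth:
        return [(lo, hi)]              # candidate cell for surface meshing
    mid = 0.5 * (lo + hi)
    cells = []
    for ox in range(2):                # descend into the eight children
        for oy in range(2):
            for oz in range(2):
                clo = np.where([ox, oy, oz], mid, lo)
                chi = np.where([ox, oy, oz], hi, mid)
                cells += octree_isovalue_cells(field, clo, chi, isovalue,
                                               depth + 1, max_depth)
    return cells

# Example: cells intersecting the unit sphere inside [-2, 2]^3
cells = octree_isovalue_cells(lambda x, y, z: (x*x + y*y + z*z) ** 0.5,
                              (-2, -2, -2), (2, 2, 2), 1.0, 0, 4)
print(len(cells), "candidate cells")
```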

Abstract


Cathodic protection (CP) is a technique that prevents corrosion of underground metallic structures. The design of any CP system first requires determining the protective current density and potential distribution, which should meet a given criterion. It must also provide a current density distribution on the surface of the protected object that is as uniform as possible. Determining the current density and potential distribution of a CP system amounts to solving the Laplace partial differential equation. The mathematical model comprises the Laplace equation together with two additional equations that define the boundary conditions. These two equations are non-linear: they represent the polarization curves that define the relationship between current density and potential on the electrode surfaces. Nowadays, the only reliable way to determine the current density and potential distribution is to apply numerical techniques. This paper presents efficient numerical techniques for calculating the current density and potential distribution of a CP system based on the coupled boundary element method (BEM) and finite element method (FEM).
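
To illustrate the nonlinear boundary condition described above, here is a toy sketch, not the paper's BEM/FEM formulation: the electrolyte is lumped into a single ohmic relation (standing in for the discretized Laplace operator) and intersected with a Butler-Volmer-type polarization curve by Newton iteration. Every constant is invented for illustration.

```python
import numpy as np

# Toy stand-in for the coupled system at a single electrode node.
# Electrolyte: ohmic relation i = (E_cell - E) / R (stands in for the
# discretized Laplace operator). Electrode surface: nonlinear polarization
# curve i = f(E). All constants below are illustrative, not from the paper.

R = 50.0          # lumped electrolyte resistance, ohm (illustrative)
E_cell = -1.2     # driving potential, V (illustrative)

def polarization(E):
    """Butler-Volmer-type curve i = f(E) and its derivative."""
    i0, ba, bc, E0 = 1e-4, 0.06, 0.12, -0.65
    i = i0 * (np.exp((E - E0) / ba) - np.exp(-(E - E0) / bc))
    di = i0 * (np.exp((E - E0) / ba) / ba + np.exp(-(E - E0) / bc) / bc)
    return i, di

# Newton iteration on g(E) = f(E) - (E_cell - E)/R = 0
E = -0.7                                   # initial guess, V
for _ in range(50):
    i, di = polarization(E)
    g = i - (E_cell - E) / R
    dg = di + 1.0 / R
    step = g / dg
    E -= step
    if abs(step) < 1e-12:
        break

i, _ = polarization(E)
print(f"electrode potential E = {E:.4f} V, current density i = {i:.3e} A/m^2")
```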

Abstract


This paper deals with the shape optimization of a pump suction, with the objective of improving pump performance. A combination of ANSYS CFX software tools and a surrogate-based method, the so-called multistart local metric stochastic RBF (MLMSRBF) method for the global optimization of “expensive black-box functions”, is employed. The shape of the suction is driven by 18 geometric parameters, and the cost functional is based on the CFD results. The practical aspects of assembling and evaluating a parametric CFD model, including a mesh independence study, are shown. After the initial design-of-experiments evaluation, a response surface model is created and used to generate new sample points for the expensive CFD evaluation; the whole process is then repeated as many times as necessary. A parallel version of the method is used, with the modifications needed to deal with CFD-specific problems such as failed designs and uncertain computational times. Both steady-state and transient models are used for the optimization, each with a different objective function. The resulting designs are then compared with the original geometry using a complete model of the pump and a fully transient simulation. Both hydraulic characteristics and multiphase cavitation simulations are considered for the comparison. Finally, the results and challenges of using these methods for CFD-driven shape optimization are discussed.
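
A minimal sketch of such a surrogate loop, under stated assumptions: the expensive CFD objective is replaced by a cheap analytic stand-in, scipy's RBFInterpolator plays the role of the response-surface model, and the candidate-generation rule is a crude simplification, not the exact MLMSRBF algorithm. All names and values are illustrative.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def expensive_objective(x):
    """Cheap analytic stand-in for one expensive CFD evaluation."""
    return float(np.sum((x - 0.3) ** 2) + 0.1 * np.sin(5 * np.sum(x)))

dim, n_init, n_iter = 18, 40, 60          # 18 geometric parameters
X = rng.random((n_init, dim))             # initial design of experiments
y = np.array([expensive_objective(x) for x in X])

for _ in range(n_iter):
    surrogate = RBFInterpolator(X, y)     # response-surface model
    best = X[np.argmin(y)]
    # Candidate points: local perturbations around the current best plus
    # global random samples (a crude stand-in for the multistart logic).
    cand = np.vstack([best + 0.05 * rng.standard_normal((200, dim)),
                      rng.random((200, dim))]).clip(0.0, 1.0)
    x_new = cand[np.argmin(surrogate(cand))]
    X = np.vstack([X, x_new])             # one new expensive evaluation
    y = np.append(y, expensive_objective(x_new))

print("best objective:", y.min())
```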

Abstract


The paper deals with a deterministic-stochastic, time-domain assessment of the transient electric field generated by a ground penetrating radar (GPR) dipole antenna and transmitted into a lower half-space. The deterministic time-domain formulation is based on the space-time Hallén integral equation for half-space problems. The Hallén equation is solved via the Galerkin–Bubnov variant of the Indirect Boundary Element Method (GB-IBEM), yielding the space-time current distribution along the dipole antenna, from which the field is calculated. The field transmitted into the lower medium is obtained by evaluating the corresponding field integrals.

As GPR systems operate in a rather complex environment, some input parameters, for example the antenna height above ground or the earth properties, are partly or entirely unknown; therefore, a simple stochastic collocation (SC) method is used to properly assess the relevant statistics of the GPR time responses. The SC approach also aids in the assessment of the corresponding confidence intervals from the set of obtained numerical results. The expansion of the statistical output in terms of mean and variance over a polynomial basis, via the SC method, is shown to be a robust and efficient approach with a satisfactory convergence rate.
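
A hedged sketch of the stochastic collocation idea: treating the antenna height as uniform on an invented interval, a handful of deterministic runs at Gauss-Legendre collocation points is enough to estimate the mean and variance of the response. Here gpr_response is a hypothetical stand-in for one GB-IBEM solve.

```python
import numpy as np

def gpr_response(h):
    """Stand-in for one deterministic GB-IBEM run at antenna height h (m).
    The real solver returns a transient field value; this analytic proxy
    merely mimics a smooth dependence on the uncertain input."""
    return np.exp(-h) * np.cos(3.0 * h)

# Antenna height modelled as uniform on [0.5, 1.5] m (illustrative range).
a, b = 0.5, 1.5
nodes, weights = np.polynomial.legendre.leggauss(7)   # collocation points
h = 0.5 * (b - a) * nodes + 0.5 * (a + b)             # map [-1, 1] -> [a, b]
w = 0.5 * weights                                     # weights for uniform pdf

samples = np.array([gpr_response(hi) for hi in h])    # 7 deterministic runs
mean = np.sum(w * samples)
var = np.sum(w * samples ** 2) - mean ** 2
print(f"mean = {mean:.5f}, std = {np.sqrt(var):.5f}")
# Confidence intervals in the paper's sense follow as mean +/- k * std.
```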

Abstract


In this study, we develop an efficient topology optimisation method combining the H-matrix method and the boundary element method (BEM). In the sensitivity analyses of topology optimisation, we need to solve a set of two algebraic equations which, in many cases, share a common coefficient matrix. In such cases, using a direct solver such as LU decomposition to factorise the coefficient matrix reduces the computational time of the sensitivity analysis. A coefficient matrix derived by the BEM is, however, fully populated, which makes the LU decomposition numerically expensive. In this research, the LU decomposition for the sensitivity analyses of topology optimisation problems is accelerated using the H-matrix method. We demonstrate the efficiency of the proposed method with a numerical example of a multi-objective optimisation problem for a 2D electromagnetic field.
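
The factorisation-reuse idea is easy to show with a dense LU stand-in; the paper's contribution is making the factorisation itself cheap via the H-matrix method, which plain scipy does not provide. This sketch merely factorises a common coefficient matrix once and back-substitutes for both the forward and the adjoint right-hand side; the matrix and vectors are synthetic.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Dense stand-in for a fully populated BEM coefficient matrix.
rng = np.random.default_rng(1)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix

b_forward = rng.standard_normal(n)    # RHS of the forward (state) problem
b_adjoint = rng.standard_normal(n)    # RHS of the adjoint (sensitivity) problem

lu, piv = lu_factor(A)                # factorise once: O(n^3) work
u = lu_solve((lu, piv), b_forward)    # each solve reuses the factors: O(n^2)
lam = lu_solve((lu, piv), b_adjoint)

print(np.allclose(A @ u, b_forward), np.allclose(A @ lam, b_adjoint))
```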

Abstract


This paper discusses the consequences of using different filler metals in the metal inert gas (MIG) welding of aluminium alloy Al 7075 sheet-metal joints. Al 7075 is nowadays widely used in the automobile and aviation industries due to its light weight, high strength, and high hardness. Fusion welding processes such as MIG and TIG are commonly used for joining aluminium alloys because of their low cost. However, defects frequently occur in fusion welding because of inappropriate welding parameters and filler metal selection. The purpose of this study is to determine how filler metals of different composition, together with the welding parameters, affect the mechanical properties of welded Al 7075. The welding parameters were current, voltage, welding speed, and argon (Ar) as the shielding gas. Two types of filler metal were used, Electrode Rod (ER) 4043 and ER5356, which are Al-Si and Al-Mg based, respectively. Microstructure analysis showed that the fusion zone (FZ) of the sample welded with ER4043 has a smaller grain size than that of the sample welded with ER5356; both fillers produced equiaxed dendritic grains in the FZ. Samples welded with either filler have a lower hardness in the FZ than in the heat-affected zone (HAZ) and base metal (BM), owing to the compositional differences between the Al-Si (ER4043) and Al-Mg (ER5356) fillers. The weld efficiency of the sample welded with ER5356 was 61%, higher than the 43% obtained with ER4043, and both samples fractured in a brittle manner: the ER5356 sample fractured at the HAZ due to porosity, while the ER4043 sample fractured at the FZ due to oxide inclusions.
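
For reference, and assuming the conventional definition (the abstract does not state it), weld (joint) efficiency is the ratio of the tensile strength of the welded joint to that of the base metal:

```latex
\eta_{\text{weld}} = \frac{\sigma_{\text{UTS, weld}}}{\sigma_{\text{UTS, BM}}} \times 100\,\%
```

On this reading, the ER5356 joint retained 61% of the base-metal tensile strength, against 43% for ER4043.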

Abstract


Polyhydroxyalkanoates (PHAs) are a family of biodegradable and biocompatible polyesters that have recently attracted much industrial attention. The most representative PHA is poly(3-hydroxybutyrate) (PHB), though it presents several shortcomings such as brittleness and poor impact resistance. 3-hydroxyhexanoate units can be incorporated into PHB to obtain poly(3-hydroxybutyrate-co-3-hydroxyhexanoate) (PHBHHx), a copolymer with improved mechanical properties, processability and biodegradability, more suitable for biomedical applications. In this study, chitosan-grafted polycaprolactone (CS-g-PCL)/PHBHHx fiber blends of different compositions were developed by wet electrospinning, and their morphology, biodegradability, and mechanical and tribological properties were investigated. A direct correlation was found between the wear rate and the mechanical properties, indicating that fiber breakage is the mechanism responsible for both abrasive wear and yield. The interactions between the components led to a synergistic effect on the tensile and tribological properties at a blend composition of 70/30, resulting in an optimal combination of maximum stiffness, strength, ductility and toughness with a minimum coefficient of friction and wear rate, ascribed to the lower porosity and higher crystallinity of this sample. Furthermore, this blend exhibits the slowest degradation rate. These fiber blends are ideal candidates as scaffolds for tissue engineering applications.

Abstract


The design and optimisation of a latent heat thermal storage system require knowledge of the flow, heat and mass transfer during the melting (charging) and solidification (discharging) of high-temperature phase change materials (PCMs). Using ANSYS Fluent, numerical modelling was performed to study the impact of natural convection and turbulence on the melting of a high-temperature PCM in a latent heat storage system at Ra = 10^12. The calculation considered a two-dimensional symmetric grid of a dual-tube element in a parallel-flow shell-and-tube configuration, in which the heat transfer fluid passes through the tube and the PCM fills the shell. Three models of the PCM melting process were considered: pure conduction; conduction and natural convection; and the latter with turbulence. The first model showed a one-dimensional melt front evolving parallel to the tube, which results in lower peak temperatures and temperature gradients and a larger heat transfer area for a longer period of time, but a lower heat transfer rate, since natural convection is ignored. The second model showed a two-dimensional melt front evolving mainly perpendicular to the tube and shrinking downward, resulting in a loss of heat transfer area and higher peak temperatures and temperature gradients, but a higher heat transfer rate, owing to the convection cells that facilitate mass and heat transfer. Including turbulence led to stronger mixing due to the higher velocity of the convection cells, resulting in a more uniform process with lower peak temperatures and temperature gradients and a higher heat transfer rate. For melting processes with Ra > 10^11, including the effects of convection and turbulence provides more realistic predictions of the flow, mass and heat transfer.
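
A brief numerical aside: the melting regime is classified through the Rayleigh number, Ra = g·β·ΔT·L^3 / (ν·α). The property values below are invented placeholders for a high-temperature PCM, used only to show the order-of-magnitude arithmetic behind Ra ≈ 10^12.

```python
# Rayleigh number Ra = g * beta * dT * L**3 / (nu * alpha).
# Property values are illustrative placeholders, not the paper's data.
g = 9.81          # gravitational acceleration, m/s^2
beta = 3.0e-4     # thermal expansion coefficient, 1/K
dT = 100.0        # driving temperature difference, K
L = 1.0           # characteristic length (shell height), m
nu = 1.0e-6       # kinematic viscosity, m^2/s
alpha = 3.0e-7    # thermal diffusivity, m^2/s

Ra = g * beta * dT * L**3 / (nu * alpha)
print(f"Ra = {Ra:.2e}")   # ~1e12: natural convection and turbulence matter
```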

Abstract


An axle bearing is one of the most important components in guaranteeing the service life of a rail car. To ensure a stable and reliable bearing life, it is essential to estimate the fatigue life of the axle bearing under its loading conditions. The fatigue life of a bearing is affected by many parameters, such as material properties, heat treatment, lubrication conditions, operating temperature, loading conditions, bearing geometry, the internal clearance of the bearing, and so on. Because these factors are intricately interrelated, it is very important to investigate their effects on the axle bearing life. This paper presents a process for estimating the fatigue life of a railroad roller bearing that takes the geometric parameters of the bearing into account in the life calculation. The load distributions in the bearing were determined by numerically solving the force and moment equilibrium equations with Lundberg's approximate model. The paper focuses on analyzing the effects of the bearing's geometric parameters on the fatigue life using the Taguchi method.
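
A minimal sketch of the basic rating-life arithmetic underlying such studies, assuming the standard L10 relation for roller bearings with load-life exponent 10/3. The load values are illustrative, and the internal load distribution and Lundberg-model details from the paper are not reproduced here.

```python
import math

# Basic L10 rating life for a roller bearing: L10 = (C / P)**(10/3),
# in millions of revolutions (load-life exponent 10/3 for roller bearings).
# C and P are illustrative values, not the paper's data.

C = 320.0e3   # basic dynamic load rating, N (illustrative)
P = 45.0e3    # equivalent dynamic bearing load, N (illustrative)

L10 = (C / P) ** (10.0 / 3.0)                 # millions of revolutions
km = L10 * 1e6 * math.pi * 0.92 / 1000.0      # distance for a 0.92 m wheel
print(f"L10 = {L10:.0f} million revolutions (~{km:,.0f} km)")
```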

Open Access
Research article
Factors Influencing the Retreat of the Coastline
M. López, J.I. Pagán, I. López, A.J. Tenza-Abril, J. García-Barba
Available online: 08-31-2027

Abstract


One of the main problems of coastlines around the world is erosion. Many studies have tried to link coastal erosion to different parameters, such as the maritime climate, sediment transport, sea level rise, etc. However, it is unclear to what extent these factors influence coastal erosion. For example, the Intergovernmental Panel on Climate Change (IPCC) has predicted that sea level will rise at a much faster rate than that experienced in the first part of this century, reaching 1 m of elevation in some areas. Another factor to consider is the lack of sediment supply, since the contribution of new sediments from rivers or ravines is currently interrupted by anthropic activities carried out in their basins (dams, channelling, etc.). Large storms, increasingly frequent due to climate change, should also be considered, since they transport sediment offshore beyond the depth of closure, preventing its return to the beach. In addition, the sediment undergoes a process of wear for various reasons, such as the dissolution of the carbonate fraction and/or the breakage and separation of the components of the particles. All these elements, to a greater or lesser extent, lead to the retreat of the coastline. Therefore, the aim of this study is to analyse the different factors causing the retreat of the coastline, in order to determine the degree of involvement of each of them and thus be able to put forward proposals to reduce the consequences of coastal erosion.

Abstract


The depth of closure of the beach profile, hereafter termed the DoC, is a key parameter for the effective evaluation of beach nourishments and coastal defence works. It is defined, for a given time interval, as the depth closest to the shore at which there is no significant change in seabed elevation and no significant net sediment transport between the nearshore and the offshore. To obtain this point it is necessary to compare profile surveys over a given period of time and evaluate them to find the point in the profile where the depth variation is equal to, or less than, a pre-selected criterion. In order to manage all this information, a software application has been developed. Given the beach profiles as input, this tool offers the possibility of selecting the dates of the desired period of study, graphing the profiles, and then obtaining, for each XY coordinate, all the required parameters, such as offshore distance, maximum, average and minimum depth, standard deviation and the area difference between profiles. By evaluating each point along the profile, the DoC is obtained at the point that meets the criterion. Moreover, this tool can graph not only the initial and final profiles of the period but all the recorded beach profiles, creating their maximum and minimum envelopes. In addition, if the user introduces the parameters of the equilibrium beach profile, the tool also corrects the area difference, taking into account the morphological changes (erosion-accretion) that may have occurred during the period studied. In conclusion, this tool provides a friendly interface for obtaining the DoC accurately through the interactive selection of the period of study; it also stores all the information and exports it to different formats.
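
A minimal sketch of the DoC criterion just described, reduced to two surveys on a common grid; the actual tool handles full survey sets, envelopes and equilibrium-profile corrections. The 0.05 m criterion and the data are illustrative.

```python
import numpy as np

def depth_of_closure(x, z_initial, z_final, criterion=0.05):
    """Simplified two-survey DoC estimate.

    x: offshore distance (m), increasing seaward, common to both surveys
    z_initial, z_final: bed elevations (m) from the two profile surveys
    criterion: maximum significant elevation change (m); 0.05 m is an
               illustrative choice, not the paper's value.

    Returns (distance, depth) of the closest-to-shore point beyond which
    every elevation change stays within the criterion.
    """
    change = np.abs(np.asarray(z_final) - np.asarray(z_initial))
    violated = np.nonzero(change > criterion)[0]   # indices of active change
    if violated.size == 0:
        return x[0], z_final[0]          # profile never changes significantly
    i = violated[-1] + 1                 # first point of the quiet seaward zone
    if i >= len(x):
        return None                      # no closure inside the surveyed reach
    return x[i], z_final[i]

# Synthetic example: elevation changes decay seaward
x = np.linspace(0, 600, 61)
z0 = -0.02 * x
z1 = z0 + 0.5 * np.exp(-x / 120.0)
print(depth_of_closure(x, z0, z1))
```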

Abstract


The use of oil-spill booms as floating barriers must respect environmental conditions, mechanical limitations and operational constraints. Numerical modelling of boom behaviour can be used to prepare or validate booming plans that respect these constraints. We present simulations of boom behaviour during an exercise in Galicia in support of existing contingency plans. The main inputs of the simulations are environmental data on meteorology and oceanography, pollution field data, and the technical specifications of commercially available booms. The structural analysis of the barrier uses a four-step model with an adaptive geometry. The model results are used in two ways. First, a preparedness approach is conducted with a three-section boom plan to protect a mussel farm near Puebla del Caramiñal. Second, a post-experiment analysis is made with a four-section plan and time-dependent boundary conditions given by the position records of the five GPS buoys deployed during the experiment. This numerical validation of the boom plan is complementary to the operational training of the boom deployment. The model results reproduce the barriers' behaviour during the exercise and improve contingency planning for future responses. The proposed approach has been generalized to other environments, such as estuaries, ports and lakes.

Abstract


In the 1960s, coinciding with the massive demand for credit cards, financial companies needed a method to quantify their exposure to insolvency risk, and they began applying credit-scoring techniques. In the 1980s, credit-scoring techniques were extended to loans owing to the increased demand for credit and to computational progress. In 2004, new recommendations of the Basel Committee on Banking Supervision (known as Basel II) appeared. With the ensuing global financial crisis, a new document, Basel III, followed; it introduced more demanding requirements on the control of borrowed capital.

Nowadays, one of the main problems not yet addressed is the presence of large datasets. This research focuses on calculating probabilities of default for home equity loans and on measuring the computational efficiency of several statistical and data mining methods. To this end, Monte Carlo experiments with well-known techniques and algorithms have been carried out.

These computational experiments reveal that large datasets require Big Data techniques and algorithms that yield faster, unbiased estimators.
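
A toy version of such a Monte Carlo experiment, assuming scikit-learn is available: synthetic home-equity-style data are generated, and the fitting time and in-sample AUC of a logistic regression are averaged over replications. This is a stand-in for illustration, not the paper's experimental design.

```python
import time
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def synthetic_loans(n, d=10):
    """Toy home-equity-style data: features X and a default indicator y."""
    X = rng.standard_normal((n, d))
    beta = rng.standard_normal(d)
    p = 1.0 / (1.0 + np.exp(-(X @ beta - 2.0)))   # low default rate
    y = rng.random(n) < p
    return X, y.astype(int)

times, aucs = [], []
for _ in range(10):                     # Monte Carlo replications
    X, y = synthetic_loans(200_000)     # "large dataset" regime
    t0 = time.perf_counter()
    model = LogisticRegression(max_iter=1000).fit(X, y)
    times.append(time.perf_counter() - t0)
    aucs.append(roc_auc_score(y, model.predict_proba(X)[:, 1]))

print(f"mean fit time: {np.mean(times):.2f} s, mean AUC: {np.mean(aucs):.3f}")
```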
