Practical Problems (practical + problem)
Selected Abstracts

A missing values imputation method for time series data: an efficient method to investigate the health effects of sulphur dioxide levels
ENVIRONMETRICS, Issue 2 2010
Swarna Weerasinghe
Abstract: Environmental data contain lengthy records of sequential missing values. A practical problem arose in the analysis of the adverse health effects of sulphur dioxide (SO2) levels on asthma hospital admissions for Sydney, Nova Scotia, Canada. Reliable missing-values imputation techniques are required to obtain valid estimates of associations with sparse health outcomes such as asthma hospital admissions. In this paper, a new method that incorporates prediction errors into the imputation of missing values is described, using mean daily average sulphur dioxide levels that follow a stationary time series with a random error. Existing imputation methods fail to incorporate the prediction errors. An optimal method is developed by extending a between-forecast method to include prediction errors. Validity and efficacy are demonstrated by comparing performance against imputed values that do not include prediction errors. The performance of the optimal method is demonstrated by the increased validity and accuracy of the β coefficient of the Poisson regression model for the association with asthma hospital admissions. Visual inspection of the imputed sulphur dioxide levels with prediction errors showed that the variation is better captured. The method is computationally simple and can be incorporated into existing statistical software. Copyright © 2009 John Wiley & Sons, Ltd. [source]
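A rough sketch of the idea in the abstract above, under the simplifying assumption that the daily series behaves like a stationary AR(1) process (the paper's exact between-forecast formulation is not given in the abstract, and all names and values here are illustrative): imputing a gap with the bare forecast flattens the series, while adding a draw from the prediction-error distribution preserves its variability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stationary AR(1) series standing in for mean daily SO2 levels.
phi, sigma, n = 0.6, 1.0, 200
x = np.empty(n)
x[0] = rng.normal(scale=sigma / np.sqrt(1 - phi**2))
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(scale=sigma)

obs = x.copy()
gap = slice(80, 95)            # a sequential run of missing values
obs[gap] = np.nan

def impute_ar1(y, phi, sigma, add_prediction_error, rng):
    """Fill NaNs with one-step-ahead forecasts, optionally adding a draw
    from the forecast (prediction) error distribution."""
    z = y.copy()
    for t in range(1, len(z)):
        if np.isnan(z[t]):
            forecast = phi * z[t - 1]
            noise = rng.normal(scale=sigma) if add_prediction_error else 0.0
            z[t] = forecast + noise
    return z

plain = impute_ar1(obs, phi, sigma, False, np.random.default_rng(1))
with_err = impute_ar1(obs, phi, sigma, True, np.random.default_rng(1))

# Forecast-only imputation collapses towards the mean; adding the prediction
# error retains the day-to-day variation that a downstream Poisson regression
# would otherwise understate.
print("true SD in gap:        ", x[gap].std().round(2))
print("forecast-only SD:      ", plain[gap].std().round(2))
print("with prediction error: ", with_err[gap].std().round(2))
```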
Concordance with community mental health appointments: service users' reasons for discontinuation
JOURNAL OF CLINICAL NURSING, Issue 7 2004
Tony Hostick MSc
Background. Quality issues are being given renewed emphasis through clinical governance and a drive to ensure service users' views underpin health service development. Aims. To establish service users' reasons for discontinuing community-based mental health appointments in one National Health Service Trust. Method. A two-phase survey of all non-completers over a year: phase one used a structured postal questionnaire; phase two used structured interviews with respondents to phase one by post, telephone and face to face. Results. A total of 243 discharges because of non-completion were identified by local services over the 12 months of the study and followed up by the initial questionnaire, representing 8.19% of all discharges (2967) within the same period. Forty-four users were engaged and followed up within phase two of the survey. Data were subject to both quantitative and qualitative analysis. Conclusions. Analysis of responses suggests that the main reason for non-completion is dissatisfaction, although the reasons are varied and the interplay between variables is complex. Whilst this user group is not apparently suffering from 'severe mental illness', there is a clear, expressed need for a service. Relevance to clinical practice. Whoever provides such a service should be responsive to expressed need, and a non-medical approach seems to be favoured. If these needs are appropriately met then users are more likely to be engaged and satisfaction is likely to improve. Although this in itself does not necessarily mean improved clinical outcomes, users are more likely to stay in touch until an agreed discharge. Practical problems of applied health service research are discussed and recommendations are made for a review of referral systems, service delivery and organization, with suggestions for further research. [source]

He's homotopy perturbation method for two-dimensional heat conduction equation: Comparison with finite element method
HEAT TRANSFER - ASIAN RESEARCH (FORMERLY HEAT TRANSFER-JAPANESE RESEARCH), Issue 4 2010
M. Jalaal
Abstract: Heat conduction appears in almost all natural and industrial processes. In the current study, a two-dimensional heat conduction equation with different complex Dirichlet boundary conditions is studied. An analytical solution for the temperature distribution and gradient is derived using the homotopy perturbation method (HPM). Unlike most previous studies of analytical solutions with homotopy-based methods, which investigate ODEs, we focus on a partial differential equation (PDE). Employing the Taylor series, the resulting series is converted to an exact expression describing the temperature distribution in the computational domain. The problems were also solved numerically using the finite element method (FEM). Analytical and numerical results were compared with each other and excellent agreement was obtained. The present investigation shows the effectiveness of the HPM for the solution of PDEs and provides an exact solution for a practical problem. The mathematical procedure shows that the present method is much simpler than other analytical techniques because it combines homotopy analysis with the classic perturbation method. The solution can be used in further analytical and numerical studies, as well as in related natural and industrial applications, even with complex boundary conditions, as a simple and accurate technique. © 2010 Wiley Periodicals, Inc. Heat Trans Asian Res; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/htj.20292 [source]
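The abstract above checks an analytical HPM solution against numerical results. As a hedged sketch of the numerical side only, a finite-difference Laplace solver rather than the paper's FEM is shown below, on a unit square with a made-up harmonic boundary condition (u = x*y) so the computed field can be checked against a known temperature distribution.

```python
import numpy as np

# Steady 2-D heat conduction (Laplace equation) on the unit square with
# Dirichlet boundary conditions, solved by the 5-point finite-difference
# stencil and Gauss-Seidel iteration.
n = 21
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
exact = X * Y                      # x*y is harmonic, so it solves the PDE exactly

u = np.zeros((n, n))
u[0, :], u[-1, :] = exact[0, :], exact[-1, :]    # impose the Dirichlet data
u[:, 0], u[:, -1] = exact[:, 0], exact[:, -1]

for sweep in range(5000):
    change = 0.0
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new = 0.25 * (u[i + 1, j] + u[i - 1, j] + u[i, j + 1] + u[i, j - 1])
            change = max(change, abs(new - u[i, j]))
            u[i, j] = new
    if change < 1e-12:
        break

print("sweeps:", sweep + 1, " max |u - exact|:", np.abs(u - exact).max())
```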
Dynamic stiffness for piecewise non-uniform Timoshenko column by power series, Part I: Conservative axial force
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 5 2001
A. Y. T. Leung
Abstract: The dynamic stiffness method uses the solutions of the governing equations as shape functions in a harmonic vibration analysis. One element can predict many modes exactly in the classical sense. The disadvantages lie in the transcendental nature of the formulation and in the need to solve a non-linear eigenproblem for the natural modes, which can be handled by the Wittrick–Williams algorithm and the Leung theorem. Another practical problem is to solve the governing equations exactly for the shape functions, for non-uniform members in particular. It is proposed to use power series for this purpose. Dynamic stiffness matrices for non-uniform Timoshenko columns are taken as examples. The shape functions can be found easily by symbolic programming, and stepped beam structures can be treated without difficulty. The new contributions of the paper include a general formulation, an extended Leung's theorem and its application to parametric study. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Backbone Diversity Analysis in Catalyst Design
ADVANCED SYNTHESIS & CATALYSIS (PREVIOUSLY: JOURNAL FUER PRAKTISCHE CHEMIE), Issue 3 2009
Abstract: We present a computer-based heuristic framework for designing libraries of homogeneous catalysts. In this approach, a set of given bidentate ligand-metal complexes is disassembled into key substructures ("building blocks"). These include metal atoms, ligating groups, backbone groups, and residue groups. The computer then rearranges these building blocks into a new library of virtual catalysts. We then tackle the practical problem of choosing a diverse subset of catalysts from this library for actual synthesis and testing. This is not trivial, since 'catalyst diversity' itself is a vague concept. Thus, we first define and quantify this diversity as the difference between key structural parameters (descriptors) of the catalysts, for the specific reaction at hand. Subsequently, we propose a method for choosing diverse sets of catalysts based on catalyst backbone selection, using weighted D-optimal design. The computer selects catalysts with different backbones, where the difference is measured as a distance in descriptor space. We show that choosing such a D-optimal subset of backbones gives more diversity than simple random sampling. The results are demonstrated experimentally in the nickel-catalysed hydrocyanation of 3-pentenenitrile to adiponitrile. Finally, the connection between backbone diversity and catalyst diversity, and the implications for in silico catalysis design, are discussed. [source]

Radiochemical stability of 14C-compounds on storage: benefits of thioethers
JOURNAL OF LABELLED COMPOUNDS AND RADIOPHARMACEUTICALS, Issue 3 2003
Andreas Fredenhagen
Abstract: Storage of radiochemicals is a significant practical problem. Storage as a solution in various solvents was compared to storage as a neat oil or solid over an extended period of time. Dichloromethane, a solvent previously not recommended for storage, was found to be a good choice in certain solvent mixtures. Addition of methylsulfide or 2-methyl-2-butene was shown to reduce radiochemical decomposition by a factor of 1.7–3.2 in ethanol-free solvents. General points to consider for the storage of radiochemicals are discussed. Radiochemical purity was determined by HPLC. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Optimal bimodal pore networks for heterogeneous catalysis
AICHE JOURNAL, Issue 4 2004
Stefan Gheorghiu
Abstract: A practical problem in the rational design of a heterogeneous catalyst is to optimize its structure at all scales. By optimizing the large-pore network of a bimodal porous catalyst with a given nanoporosity (for example, a zeolite or mesoporous catalyst) for the yield of diffusion-limited first-order reactions, it is found that catalysts typically benefit from a hierarchical pore network with a broad pore-size distribution. When comparing the performance of the optimal structures to that of self-similar, fractal-like pore hierarchies, it is found that the latter can be made to have the same effectiveness factor as the optimal ones, suggesting that fractal-like catalysts operate very near optimality, even if their structure is considerably different from that of the true optima. This is useful, because fractal-like structures have the advantage of being organized in a modular, natural way, potentially easy to reproduce by templating. © 2004 American Institute of Chemical Engineers AIChE J, 50: 812–820, 2004 [source]
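For background on the figure of merit used in the comparison above, the classical effectiveness factor for a diffusion-limited first-order reaction is a one-line formula. The slab expression below is textbook material rather than the paper's pore-network optimization, and the parameter values are arbitrary.

```python
import numpy as np

def effectiveness_factor_slab(k, D_eff, L):
    """First-order reaction in a slab catalyst of half-thickness L:
    eta = tanh(phi)/phi with Thiele modulus phi = L*sqrt(k/D_eff)."""
    phi = L * np.sqrt(k / D_eff)
    return np.tanh(phi) / phi

# Faster reactions (larger Thiele modulus) become diffusion-limited and the
# effectiveness factor drops well below 1, which is the regime in which the
# optimized and fractal-like pore networks are compared.
for k in (1e-2, 1e0, 1e2):                                   # rate constant, 1/s
    eta = effectiveness_factor_slab(k, D_eff=1e-9, L=1e-4)   # m^2/s, m
    print(f"k = {k:g} 1/s  ->  eta = {eta:.3f}")
```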
Novel methodology for the archiving and interactive reading of clinical magnetic resonance spectroscopic imaging
MAGNETIC RESONANCE IN MEDICINE, Issue 3 2002
Jeffry R. Alger
Abstract: Archiving clinical magnetic resonance spectroscopic imaging (MRSI) data and presenting the data to specialists (e.g., neuroradiologists, neurosurgeons, neurologists, neuro-oncologists, and MR scientists) who work in different physical locations is a practical problem of significance. This communication describes a novel solution. The study hypothesis was that it is possible to use widely available distributed computing techniques to create a clinical MRSI user interface addressable from any personal computer with a suitable network connection. A worldwide web MRSI archive and interface system was created that permits the user to interactively view individual MRSI voxel spectra with correlation to MR images and to parametric spectroscopic images. Web browser software (i.e., Netscape and Internet Explorer) permits users in various physical locations to access centrally archived MRSI data using a variety of operating systems and client workstations. The system was used for archiving and displaying more than 1000 clinical MRSI studies performed at the authors' institution. The system also permits MRSI data to be viewed via the Internet from distant locations worldwide. The study illustrates that widely available software operating within highly distributed electronic networks can be used for archiving and interactive reading of large amounts of clinical MRSI data. Magn Reson Med 48:411–418, 2002. © 2002 Wiley-Liss, Inc. [source]

Comparison of mechanical properties of epoxy composites reinforced with stitched glass and carbon fabrics: Characterization of mechanical anisotropy in composites and investigation on the interaction between fiber and epoxy matrix
POLYMER COMPOSITES, Issue 8 2008
Volkan Çeçen
The primary purpose of the study is to evaluate and compare the mechanical properties of epoxy-based composites having different fiber reinforcements. Glass and carbon fiber composite laminates were manufactured by vacuum infusion of epoxy resin into two commonly used noncrimp stitched fabric (NCF) types: unidirectional and biaxial fabrics. The effects of geometric variables on composite structural integrity and strength were illustrated. Hence, tensile and three-point bending flexural tests were conducted up to failure on specimens strengthened with different layouts of fibrous plies in NCF. In this article, an important practical problem in fibrous composites, interlaminar shear strength as measured in the short beam shear test, is discussed. The fabric composites were tested in three directions: at 0°, 45°, and 90°. In addition to the extensive efforts in elucidating the variation in the mechanical properties of noncrimp glass and carbon fabric reinforced laminates, the work presented here also focuses on the type of interactions that are established between fiber and epoxy matrix. The experiments, in conjunction with scanning electron photomicrographs of fractured surfaces of composites, were interpreted in an attempt to explain the failure mechanisms in the composite laminates broken in tension. POLYM. COMPOS., 2008. © 2008 Society of Plastics Engineers [source]
Building native protein conformation from NMR backbone chemical shifts using Monte Carlo fragment assembly
PROTEIN SCIENCE, Issue 8 2007
Haipeng Gong
Abstract: We have been analyzing the extent to which protein secondary structure determines protein tertiary structure in simple protein folds. An earlier paper demonstrated that three-dimensional structure can be obtained successfully using only highly approximate backbone torsion angles for every residue. Here, the initial information is further diluted by introducing a realistic degree of experimental uncertainty into this process. In particular, we tackle the practical problem of determining three-dimensional structure solely from backbone chemical shifts, which can be measured directly by NMR and are known to be correlated with a protein's backbone torsion angles. Extending our previous algorithm to incorporate these experimentally determined data, clusters of structures compatible with the experimentally determined chemical shifts were generated by fragment assembly Monte Carlo. The cluster that corresponds to the native conformation was then identified based on four energy terms: steric clash, solvent-squeezing, hydrogen-bonding, and hydrophobic contact. Currently, the method has been applied successfully to five small proteins with simple topology. Although still under development, this approach offers promise for high-throughput NMR structure determination. [source]

An autonomous adaptive scheduling agent for period searching
ASTRONOMISCHE NACHRICHTEN, Issue 3 2008
E.S. Saunders
Abstract: We describe the design and implementation of an autonomous adaptive software agent that addresses the practical problem of observing undersampled, periodic, time-varying phenomena using a network of HTN-compliant robotic telescopes. The algorithm governing the behaviour of the agent uses an optimal geometric sampling technique to cover the period range of interest, but additionally implements proactive behaviour that maximises the optimality of the dataset in the face of an uncertain and changing operating environment. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
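A minimal sketch of what geometric sampling of a period range can look like, assuming a simple geometric progression of observation times and a crude phase-coverage score; the published agent's actual optimality criterion and its adaptive, proactive behaviour are not described in the abstract, so this is an illustration rather than that algorithm.

```python
import numpy as np

def geometric_times(n_obs, p_min, p_max):
    """Place n_obs observation offsets in a geometric progression so that a
    single observing run samples phase reasonably well for any period
    between p_min and p_max (a simplified reading of 'geometric sampling')."""
    span = 3.0 * p_max                       # total baseline, a few x the longest period
    ratio = (span / p_min) ** (1.0 / (n_obs - 1))
    return p_min * ratio ** np.arange(n_obs)

def phase_coverage(times, period, n_bins=10):
    """Fraction of phase bins hit at least once for a trial period."""
    phases = (times % period) / period
    return len(np.unique((phases * n_bins).astype(int))) / n_bins

times = geometric_times(n_obs=30, p_min=0.1, p_max=10.0)     # e.g. days
for period in (0.13, 0.7, 3.0, 9.0):
    print(f"P = {period:5.2f} d  ->  phase coverage {phase_coverage(times, period):.1f}")
```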
VULNERABILITY IN RESEARCH AND HEALTH CARE; DESCRIBING THE ELEPHANT IN THE ROOM?
BIOETHICS, Issue 4 2008
SAMIA A. HURST
Abstract: Despite broad agreement that the vulnerable have a claim to special protection, defining vulnerable persons or populations has proved more difficult than we would like. This is a theoretical as well as a practical problem, as it hinders both convincing justifications for this claim and the practical application of required protections. In this paper, I review consent-based, harm-based, and comprehensive definitions of vulnerability in healthcare and research with human subjects. Although current definitions are subject to critique, their underlying assumptions may be complementary. I propose that we should define vulnerability in research and healthcare as an identifiably increased likelihood of incurring additional or greater wrong. In order to identify the vulnerable, as well as the type of protection that they need, this definition requires that we start from the sorts of wrongs likely to occur and from identifiable increments in the likelihood, or in the likely degree, that these wrongs will occur. It is limited but appropriately so, as it only applies to special protection, not to any protection to which we have a valid claim. Using this definition would clarify that the normative force of claims for special protection does not rest with vulnerability itself, but with pre-existing claims when these are more likely to be denied. Such a clarification could help those who carry responsibility for the protection of vulnerable populations, such as Institutional Review Boards, to define the sort of protection required in a more targeted and effective manner. [source]

A web-based tool for control engineering teaching
COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 3 2006
J. Albino Méndez
Abstract: In this article a new tool for control engineering teaching is presented. The tool was implemented using Java applets and is freely accessible through the Web. It allows the analysis and simulation of linear control systems and was created to complement the theoretical lectures in basic control engineering courses. The article is centered not only on the description of the tool but also on the methodology for using it and on its evaluation in an electrical engineering degree. Two practical problems are included in the manuscript to illustrate the use of the main functions implemented. The developed web-based tool can be accessed through the link http://www.controlweb.cyc.ull.es. © 2006 Wiley Periodicals, Inc. Comput Appl Eng Educ 14: 178–187, 2006; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20080 [source]

Growth hormone in short children: medically appropriate treatment
ACTA PAEDIATRICA, Issue 1 2001
R Macklin
Bolt and Mul argue persuasively against the "disease" approach and the "client" approach in addressing the question of whether growth hormone for short children properly belongs in the medical realm. Their own preferred approach, the "suffering" approach, is superior to the others but has practical problems that would arise in its application. An additional ethical issue, not addressed by Bolt and Mul, relates to justice in providing access to growth hormone treatment for children from families of limited financial means. [source]

Clock synchronization in Cell/B.E. traces
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2009
M. Biberstein
Abstract: Cell/B.E. is a heterogeneous multicore processor that was designed for the efficient execution of parallel and vectorizable applications with high computation and memory requirements. The transition to multicores introduces the challenge of providing tools that help programmers tune the code running on these architectures. Tracing tools, in particular, often help locate performance problems related to thread and process communication. A major impediment to implementing tracing on Cell is the absence of a common clock that can be accessed at low cost from all cores. The OS clock is costly to access from the auxiliary cores, and the hardware timers cannot be set simultaneously on all the cores. In this paper, we describe an offline trace analysis algorithm that assigns wall-clock time to trace records based on their thread-local time stamps and event order. Our experiments on several Cell SDK workloads show that the indeterminism in assigning wall-clock time to events is low, on average 20–40 clock ticks (translating into 1.4–2.8 µs on the system used in our experiments). We also show how various practical problems, such as the imprecision of time measurement, can be overcome. Copyright © 2009 John Wiley & Sons, Ltd. [source]
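The abstract above assigns wall-clock time to trace records from thread-local time stamps and event order. A minimal constraint-based sketch of that general idea (an assumption about the approach, not the published algorithm): each matched cross-core send/receive pair brackets the unknown clock offset, and intersecting the brackets yields an estimate plus an uncertainty interval, analogous to the small indeterminism the authors report.

```python
def estimate_offset(pairs):
    """Estimate the offset of core B's clock relative to core A's.

    pairs: (a_send, b_recv, b_send, a_recv) local timestamps from round-trip
    messages A->B and B->A.  Event order requires
        a_send + offset <= b_recv   and   b_send <= a_recv + offset,
    so every pair confines offset to [b_send - a_recv, b_recv - a_send].
    """
    lo = max(bs - ar for a_s, br, bs, ar in pairs)
    hi = min(br - a_s for a_s, br, bs, ar in pairs)
    if lo > hi:
        raise ValueError("inconsistent trace: no offset satisfies all pairs")
    return 0.5 * (lo + hi), (lo, hi)        # midpoint and uncertainty interval

# Local timestamps (ticks) from three round trips between two threads.
trace = [(100, 1125, 1130, 160), (300, 1322, 1330, 355), (500, 1528, 1534, 560)]
offset, bounds = estimate_offset(trace)
print(f"offset of B relative to A: {offset:.1f} ticks, bounded by {bounds}")
```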
Adventures in multivalency, the Harry S. Fischer memorial lecture CMR 2005; Evian, France
CONTRAST MEDIA & MOLECULAR IMAGING, Issue 1 2006
Michael F. Tweedle
Abstract: This review discusses multivalency in the context of drug discovery, specifically the discovery of new diagnostic imaging and related agents. The aim is to draw attention to the powerful role that multivalency plays throughout research involving molecular biology in general, and much of biochemically targeted contrast agent research in particular. Two examples from the author's laboratory are described. We created small (~5 kDa) peptide 'dimers' composed of two different, chemically linked peptides. The monomer peptides both bound to the same target protein with Kd values in the 100s of nM, while the heterodimers had sub-nM Kd values. Biological activity was evident in the heterodimers where none or very little existed in homodimers, monomers or monomer mixtures. Two different tyrosine kinases (KDR and C-Met) and four peptide families produced consistent results: multivalent heterodimers were uniquely different. The second example begins with making two-micron ultrasound bubbles coated with the peptide TKPPR (a Tuftsin antagonist) as a negative control for bubbles targeted with angiogenesis target-binding peptides. Unexpected binding of a 'negative' control, (TKPPR)-targeted bubble to endothelial cells expressing angiogenesis targets led to the surprising result that TKPPR, only when multimerized, binds avidly, specifically and actively to neuropilin-1, a VEGF co-receptor. VEGF is the primary stimulator of angiogenesis. Tuftsin is a small peptide (TKPR) derived from IgG that binds to macrophages during inflammation, and has been studied for over 30 years. The receptor has never been cloned. The results led to new conclusions about Tuftsin, neuropilin-1 and the purpose, up to now unknown, of exon 8 in VEGF. Multivalency can be used rationally to solve practical problems in drug discovery. When targeting larger structures, multivalency is frequently unavoidable, and can lead to unpredictable and useful biochemical information, as well as to new drug candidates. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Branch-and-Price Methods for Prescribing Profitable Upgrades of High-Technology Products with Stochastic Demands
DECISION SCIENCES, Issue 1 2004
Purushothaman Damodaran
Abstract: This paper develops a model that can be used as a decision support aid, helping manufacturers make profitable decisions in upgrading the features of a family of high-technology products over its life cycle. The model integrates various organizations in the enterprise: product design, marketing, manufacturing, production planning, and supply chain management. Customer demand is assumed random and this uncertainty is addressed using scenario analysis. A branch-and-price (B&P) solution approach is devised to optimize the stochastic problem effectively. Sets of random instances are generated to evaluate the effectiveness of our solution approach in comparison with that of commercial software on the basis of run time. Computational results indicate that our approach outperforms commercial software on all of our test problems and is capable of solving practical problems in reasonable run time. We present several examples to demonstrate how managers can use our models to answer "what if" questions. [source]
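The branch-and-price machinery itself is too large for a short example, but the scenario-analysis idea in the abstract can be shown by brute force on a toy instance (all feature names, costs and demand multipliers below are hypothetical): score each candidate upgrade bundle against weighted demand scenarios and keep the one with the best expected profit.

```python
from itertools import product

features = {                 # upgrade: (unit cost increase, price uplift)
    "faster_cpu": (40.0, 70.0),
    "bigger_screen": (25.0, 30.0),
    "extra_storage": (15.0, 35.0),
}
scenarios = [(0.3, 0.9), (0.5, 1.0), (0.2, 1.2)]   # (probability, demand multiplier)
base_demand, base_margin = 1000.0, 50.0

best = None
for bundle in product([0, 1], repeat=len(features)):
    chosen = [f for f, on in zip(features, bundle) if on]
    margin = base_margin + sum(features[f][1] - features[f][0] for f in chosen)
    # expected profit over the demand scenarios; richer bundles shift demand
    expected = sum(p * margin * base_demand * mult ** len(chosen)
                   for p, mult in scenarios)
    if best is None or expected > best[0]:
        best = (expected, chosen)

print(f"best expected profit {best[0]:,.0f} with upgrades {best[1]}")
```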
A Strategic Approach to Multistakeholder Negotiations
DEVELOPMENT AND CHANGE, Issue 2 2001
David Edmunds
Environment and development practitioners are increasingly interested in identifying methods, institutional arrangements and policy environments that promote negotiations among natural resource stakeholders leading to collective action and, it is hoped, sustainable resource management. Yet the implications of negotiations for disadvantaged groups of people are seldom critically examined. We draw attention to such implications by examining different theoretical foundations for multistakeholder negotiations and linking these to practical problems for disadvantaged groups. We argue that negotiations based on an unhealthy combination of communicative rationality and liberal pluralism, which underplays or seeks to neutralize differences among stakeholders, pose considerable risks for disadvantaged groups. We suggest that negotiations influenced by radical pluralist and feminist post-structuralist thought, which emphasize strategic behaviour and selective alliance-building, promise better outcomes for disadvantaged groups in most cases, particularly on the scale and in the historical contexts in which negotiations over forest management usually take place. [source]

TAXING LAND VALUE IS JUST ANOTHER QUESTIONABLE TAX
ECONOMIC AFFAIRS, Issue 4 2006
Oliver Marc Hartwich
There has recently been much public debate about the introduction of a land value tax. To its supporters such a tax promises to achieve several goals simultaneously. On closer inspection, however, the arguments in favour of land value taxation are not convincing. On the contrary, the economic foundations on which proponents of this tax rely are dubious, and there are significant legal, moral and practical problems with land value taxation. [source]

SOME PROBLEMS WITH ASSESSING COPE'S RULE
EVOLUTION, Issue 8 2008
Andrew R. Solow
Cope's Rule states that the size of species tends to increase along an evolutionary lineage. A basic statistical framework is elucidated for testing Cope's Rule and some surprising complications are pointed out. If Cope's Rule is formulated in terms of mean size, then it is not invariant to the way in which size is measured. If Cope's Rule is formulated in terms of median size, then it is not invariant to the degree of separation between ancestral and descendant species. Some practical problems in assessing Cope's Rule are also described. These results have implications for the empirical assessment of Cope's Rule. [source]
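The first complication noted above, that a mean-size version of Cope's Rule is not invariant to how size is measured, can be seen with two made-up samples: the descendant is smaller than the ancestor on the raw scale but larger on the log scale.

```python
import numpy as np

ancestor = np.array([1.0, 10.0])     # hypothetical body sizes in an ancestral species
descendant = np.array([4.0, 5.0])    # hypothetical body sizes in its descendant

print("raw scale :", ancestor.mean(), "->", descendant.mean())   # 5.5 -> 4.5, a decrease
print("log scale :", np.log10(ancestor).mean().round(3), "->",
      np.log10(descendant).mean().round(3))                      # 0.5 -> 0.65, an increase
```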
Sequential methods and group sequential designs for comparative clinical trials
FUNDAMENTAL & CLINICAL PHARMACOLOGY, Issue 5 2003
Véronique Sébille
Abstract: Comparative clinical trials are performed to assess whether a new treatment has superior efficacy to a placebo or a standard treatment (one-sided formulation) or whether two active treatments have different efficacies (two-sided formulation) in a given population. The reference approach is the single-stage design, in which the statistical test is performed after inclusion and evaluation of a predetermined sample size. In practice, the single-stage design is sometimes difficult to implement because of ethical concerns and/or economic reasons. Thus, specific early termination procedures have been developed to allow repeated statistical analyses on accumulating data and to stop the trial as soon as the information is sufficient to conclude. Two main approaches can be used. The first is derived from strictly sequential methods and includes the sequential probability ratio test and the triangular test. The second is derived from group sequential designs and includes the Peto, Pocock, and O'Brien and Fleming methods, α and β spending functions, and one-parameter boundaries. We review all these methods and describe the bases on which they rely, as well as their statistical properties. We also compare these methods and comment on their advantages and drawbacks. We present software packages which are available for the planning, monitoring and analysis of comparative clinical trials with these methods and discuss the practical problems encountered when using them. The latest versions of all these methods can offer substantial sample size reductions compared with the single-stage design, not only in the case of clear efficacy but also in the case of a complete lack of efficacy of the new treatment. The software packages make their use quite simple. However, it has to be stressed that using these methods requires efficient logistics with real-time data monitoring and, apart from survival studies or long-term clinical trials with censored endpoints, is most appropriate when the endpoint is obtained quickly compared with the recruitment rate. [source]
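Of the strictly sequential methods mentioned above, Wald's sequential probability ratio test is compact enough to sketch. The response rates, error rates and simulated data below are illustrative choices, not values from the review; the point is only how the accumulating log-likelihood ratio triggers early stopping.

```python
import math, random

def sprt_binomial(outcomes, p0=0.3, p1=0.5, alpha=0.05, beta=0.2):
    """Wald's SPRT for a response rate, H0: p = p0 versus H1: p = p1.
    Stops as soon as the accumulated log-likelihood ratio crosses a boundary."""
    upper = math.log((1 - beta) / alpha)     # cross -> conclude H1
    lower = math.log(beta / (1 - alpha))     # cross -> conclude H0
    llr = 0.0
    for n, success in enumerate(outcomes, start=1):
        llr += math.log((p1 if success else 1 - p1) / (p0 if success else 1 - p0))
        if llr >= upper:
            return "stop: efficacy (H1)", n
        if llr <= lower:
            return "stop: lack of efficacy (H0)", n
    return "continue sampling", len(outcomes)

random.seed(3)
patients = [random.random() < 0.5 for _ in range(200)]   # simulated true response rate 0.5
print(sprt_binomial(patients))
```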
Henry VII in Context: Problems and Possibilities
HISTORY, Issue 307 2007
STEVEN GUNN
Clearer understanding of Henry VII's reign is hindered not only by practical problems, such as deficiencies in source material, but also by its liminal position in historical study, at the end of the period conventionally studied by later medievalists and the beginning of that studied by early modernists. This makes it harder to evaluate changes in the judicial system, in local power structures, in England's position in European politics, in the rise of new social groups to political prominence and in the ideas behind royal policy. However, thoughtful combination of the approaches taken by different historical schools and reflection on wider processes of change at work in Henry's reign, such as in England's cultural and economic life, can make a virtue out of Henry's liminality. Together with the use of more unusual sources, such an approach enables investigation for Henry's reign of many themes of current interest to historians of the later Tudor period. These include courtly, parliamentary and popular politics, political culture, state formation and the interrelationships of different parts of the British Isles and Ireland. [source]

Irony, critique and ethnomethodology in the study of computer work: irreconcilable tensions?
INFORMATION SYSTEMS JOURNAL, Issue 2 2008
Teresa Marcon
Abstract: To broaden discussion of critique in the field of information systems beyond current approaches, we look outside the core management discourse and examine the critical element in ethnomethodological research on computer-based work environments. Our examination reveals a form of critique that is above all without irony, seeking always to be respectful of the competence of research subjects, and informed by an in-depth understanding of participants' practices. We argue that ethnomethodology is an often unrecognised critical approach that attempts to speak from within a community of practice and deliver critical insights that are responsive to the kinds of practical problems of interest to practitioners. [source]

Coupled HM analysis using zero-thickness interface elements with double nodes, Part II: Verification and application
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 18 2008
J. M. Segura
Abstract: In a companion Part I of this paper (Int. J. Numer. Anal. Meth. Geomech. 2008; DOI: 10.1002/nag.735), a coupled hydro-mechanical (HM) formulation for geomaterials with discontinuities, based on the finite element method (FEM) with double-node, zero-thickness interface elements, was developed and presented. This Part II paper includes the numerical solution of basic practical problems using both the staggered and the fully coupled approaches. A first group of simulations, based on the classical consolidation problem with an added vertical discontinuity, is used to compare both approaches in terms of accuracy and convergence. The monolithic or fully coupled scheme is also used in an application example studying the influence of a horizontal joint on the performance of a reservoir subject to fluid extraction. Results include a comparison with other numerical solutions from the literature and a sensitivity analysis of the mechanical parameters of the discontinuity. Some simulations are also run using both a full non-symmetric and a simplified symmetric Jacobian matrix. On top of verifying the model developed and its capability to reflect the conductivity changes of the interface with aperture changes, the results presented also lead to interesting observations on the numerical performance of the methods implemented. Copyright © 2008 John Wiley & Sons, Ltd. [source]
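The verification cases above start from the classical consolidation problem. For reference, the standard Terzaghi one-dimensional solution, often used as the analytical benchmark in such comparisons (textbook background, not the paper's coupled interface-element model), is a short series evaluation.

```python
import numpy as np

def terzaghi_pore_pressure(z_over_H, Tv, n_terms=100):
    """Excess pore pressure ratio u/u0 for Terzaghi 1-D consolidation;
    z_over_H is depth over the drainage path length and Tv = cv*t/H^2."""
    m = np.arange(n_terms)
    M = np.pi * (2 * m + 1) / 2
    return float(np.sum(2.0 / M * np.sin(M * z_over_H) * np.exp(-M**2 * Tv)))

def average_consolidation(Tv, n_terms=100):
    """Average degree of consolidation U(Tv)."""
    m = np.arange(n_terms)
    M = np.pi * (2 * m + 1) / 2
    return float(1.0 - np.sum(2.0 / M**2 * np.exp(-M**2 * Tv)))

for Tv in (0.05, 0.197, 0.5, 1.0):
    print(f"Tv = {Tv:5.3f}:  u/u0 at mid-depth = {terzaghi_pore_pressure(1.0, Tv):.3f},"
          f"  U = {average_consolidation(Tv):.3f}")
```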
Implementation of the finite element method in the three-dimensional discontinuous deformation analysis (3D-DDA)
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 15 2008
Roozbeh Grayeli
Abstract: A modified three-dimensional discontinuous deformation analysis (3D-DDA) method is derived using four-noded tetrahedral elements to improve the accuracy of the current 3D-DDA algorithm in practical applications. The analysis program for the modified 3D-DDA method is developed in a C++ environment and its accuracy is illustrated through comparisons with several analytical solutions that are available for selected problems. The predicted solutions for these problems using the modified 3D-DDA approach all show satisfactory agreement with the corresponding analytical results. Results presented in this paper demonstrate that the modified 3D-DDA method, with its discontinuous modeling capabilities, offers a useful computational tool for determining stresses and deformations in practical problems involving fissured elastic media with reasonable accuracy. Copyright © 2008 John Wiley & Sons, Ltd. [source]

A non-coaxial constitutive model for sand deformation under rotation of principal stress axes
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 9 2008
Ali Lashkari
Abstract: A constitutive model for the simulation of non-coaxiality, an aspect of the anisotropic behavior of sand subjected to rotation of the principal stress axes, is presented in this paper. Experimental studies have shown that non-coaxiality, or non-coincidence of principal plastic strain increments with principal stress axes, may be considerable under loadings involving rotation of the principal stress axes. Moreover, rotation of the principal stress axes has dramatic effects on the stiffness and dilatant behavior of sand. Therefore, the consequences of principal stress axes rotation on deformational behavior, dilatancy and soil stiffness must be taken into account in theoretical and practical problems. To this aim, the following steps are taken: (1) A general relationship for flow direction with respect to the possibility of non-coaxial flow is developed. Moreover, special circumstances linking non-coaxiality to instantaneous interaction between loading and soil fabric are proposed. (2) Proposing novel expressions for the plastic modulus and dilatancy function, the model is enforced to provide realistic simulations when sand is subjected to rotation of the principal stress axes. Finally, with numerous examples and comparisons, the model capabilities are shown under various stress paths and drainage conditions. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Analysis of shield tunnel
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 1 2004
W.Q. Ding
Abstract: This paper proposes a two-dimensional finite element model for the analysis of shield tunnels that takes into account the construction process, which is divided into four stages. The soil is assumed to behave as an elasto-plastic medium, whereas the shield is simulated by a beam–joint discontinuous model in which curved beam elements and joint elements are used to model the segments and joints, respectively. As grout is usually injected to fill the gap between the lining and the soil, the property parameters of the grout are chosen in such a way that they can reflect the state of the grout at each stage. Furthermore, the contact condition between the soil and lining changes with the construction stage, and therefore different stress-releasing coefficients are used to account for the changes. To assess the accuracy that can be attained by the method in solving practical problems, the shield tunnelling in the No. 7 Subway Line Project in Osaka, Japan, is used as a case history for our study. The numerical results are compared with those measured in the field. The results presented in the paper show that the proposed numerical procedure can be used to effectively estimate the deformation, stresses and moments experienced by the surrounding soils and the concrete lining segments. The analysis and method presented in this paper can be considered useful for other subway construction projects involving shield tunnelling in soft soils. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Experience in calibrating the double-hardening constitutive model Monot
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 13 2003
M. A. Hicks
The Monot double-hardening soil model has previously been implemented within a general-purpose finite element algorithm, and used in the analysis of numerous practical problems. This paper reviews experience gained in calibrating Monot to laboratory data and demonstrates how the calibration process may be simplified without detriment to the range of behaviours modelled. It describes Monot's principal features, important governing equations and various calibration methods, including strategies for overconsolidated, cemented and cohesive soils. Based on a critical review of over 30 previous Monot calibrations, for sands and other geomaterials, trends in parameter values have been identified, enabling parameters to be categorized according to their relative importance. It is shown that, for most practical purposes, a maximum of only 5 parameters is needed; for the remaining parameters, standard default values are suggested. Hence, the advanced stress–strain modelling offered by Monot is attainable with a similar number of parameters as would be needed for some simpler, less versatile models. Copyright © 2003 John Wiley & Sons, Ltd. [source]
A new stereo-analytical method for determination of removal blocks in discontinuous rock masses
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 10 2003
Zixin Zhang
Abstract: The paper provides a new stereo-analytical method, which is a combination of the stereographic method and analytical methods, to separate finite removable blocks from the infinite and tapered blocks in discontinuous rock masses. The methodology is applicable to both convex and concave blocks. Application of the methodology is illustrated through examples. Addition of this method to the existing block theory procedures available in the literature improves the capability of block theory in solving practical problems in rock engineering. Copyright © 2003 John Wiley & Sons, Ltd. [source]

The modelling of multi-fracturing solids and particulate media
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 1 2004
D. R. J. Owen
Abstract: Computational strategies in the context of combined discrete/finite element methods for the effective modelling of large-scale practical problems involving multiple fracture and discrete phenomena are reviewed in the present work. The issues considered include: (1) fracture criteria and propagation mechanisms within both the finite and discrete elements, together with mesh adaptivity procedures for discretization and the introduction of fracture systems; (2) detection procedures for monitoring contact between large numbers of discrete elements; (3) interaction laws governing the response of contact pairs; (4) parallel implementation; and (5) other issues, such as element methodology for nearly incompressible behaviour and the generation of random packings of discrete objects. The applicability of the methodology developed is illustrated through selected practical examples. Copyright © 2004 John Wiley & Sons, Ltd. [source]
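Item (2) in the abstract above, contact detection among large numbers of discrete elements, is commonly handled with a cell-based broad phase. The sketch below shows that generic technique on random circles; it is an assumption about a typical implementation, not necessarily the detection scheme used in the paper.

```python
import random
from collections import defaultdict

def contact_pairs(centres, radii):
    """Bin circle centres into a uniform grid whose cell size is the largest
    particle diameter, then test for overlap only within neighbouring cells,
    avoiding the O(n^2) all-pairs sweep."""
    cell = 2.0 * max(radii)
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(centres):
        grid[(int(x // cell), int(y // cell))].append(idx)

    pairs = set()
    for (cx, cy), members in grid.items():
        # gather candidates from this cell and its 8 neighbours
        candidates = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                candidates.extend(grid.get((cx + dx, cy + dy), []))
        for i in members:
            for j in candidates:
                if j <= i:
                    continue
                (x1, y1), (x2, y2) = centres[i], centres[j]
                if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= (radii[i] + radii[j]) ** 2:
                    pairs.add((i, j))
    return pairs

random.seed(1)
centres = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(2000)]
radii = [random.uniform(0.2, 0.5) for _ in range(2000)]
print(len(contact_pairs(centres, radii)), "contacting pairs found")
```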