Parallel Processing (parallel + processing)
Selected Abstracts

Time-Adaptive Lines for the Interactive Visualization of Unsteady Flow Data Sets
COMPUTER GRAPHICS FORUM, Issue 8 2009
N. Cuntz
I.3.3 [Computer Graphics]: Line and Curve Generation; I.3.1 [Computer Graphics]: Parallel Processing
Abstract The quest for the ideal flow visualization reveals two major challenges: interactivity and accuracy. Interactivity stands for explorative capabilities and real-time control. Accuracy is a prerequisite for every professional visualization in order to provide a reliable base for analysis of a data set. Geometric flow visualization has a long tradition and comes in very different flavors. Among these, stream, path and streak lines are known to be very useful for both 2D and 3D flows. Despite their importance in practice, appropriate algorithms suited for contemporary hardware are rare. In particular, the adaptive construction of the different line types is not sufficiently studied. This study provides a profound representation and discussion of stream, path and streak lines. Two algorithms are proposed for efficiently and accurately generating these lines using modern graphics hardware. Each includes a scheme for adaptive time-stepping. The adaptivity for stream and path lines is achieved through a new processing idea we call 'selective transform feedback'. The adaptivity for streak lines combines adaptive time-stepping and a geometric refinement of the curve itself. Our visualization is applied, among others, to a data set representing a simulated typhoon. The storage as a set of 3D textures requires special attention. Both algorithms explicitly support this storage, as well as the use of precomputed adaptivity information. [source]
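To make the adaptive time-stepping idea in the abstract above concrete, the following is a minimal CPU-side sketch of step-doubling error control for integrating a single stream line. The velocity callback, tolerances and step bounds are hypothetical placeholders, and the sketch does not reproduce the paper's GPU-based 'selective transform feedback' mechanism.

```python
import numpy as np

def trace_streamline(velocity, seed, t_end, h0=0.01, tol=1e-4, h_min=1e-6):
    """Integrate one stream line with adaptive step-size (step-doubling) control."""
    def rk4(p, h):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * h * k1)
        k3 = velocity(p + 0.5 * h * k2)
        k4 = velocity(p + h * k3)
        return p + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

    p, t, h = np.asarray(seed, dtype=float), 0.0, h0
    points = [p]
    while t < t_end and h >= h_min:
        full = rk4(p, h)                        # one step of size h
        half = rk4(rk4(p, h / 2.0), h / 2.0)    # two steps of size h/2
        err = np.linalg.norm(full - half)       # local error estimate
        if err > tol:
            h *= 0.5                            # refine where the flow changes quickly
            continue
        p, t = half, t + h
        points.append(p)
        if err < tol / 10.0:
            h *= 2.0                            # coarsen where the flow is smooth
    return np.asarray(points)

# Example with a hypothetical rotational field (not the typhoon data set):
# curve = trace_streamline(lambda p: np.array([-p[1], p[0]]), seed=[1.0, 0.0], t_end=6.28)
```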
Advanced Analysis of Steel Frames Using Parallel Processing and Vectorization
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2001
C. M. Foley
Advanced methods of analysis have shown promise in providing economical building structures through accurate evaluation of inelastic structural response. One method of advanced analysis is the plastic zone (distributed plasticity) method. Plastic zone analysis often has been deemed impractical due to computational expense. The purpose of this article is to illustrate applications of plastic zone analysis on large steel frames using advanced computational methods. To this end, a plastic zone analysis algorithm capable of using parallel processing and vector computation is discussed. Applicable measures for evaluating program speedup and efficiency on a Cray Y-MP C90 multiprocessor supercomputer are described. Program performance (speedup and efficiency) for parallel and vector processing is evaluated. Nonlinear response including postcritical branches of three large-scale fully restrained and partially restrained steel frameworks is computed using the proposed method. The results of the study indicate that advanced analysis of practical steel frames can be accomplished using plastic zone analysis methods and alternate computational strategies. [source]
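The speedup and efficiency measures mentioned in the abstract above are the standard parallel-performance ratios S_p = T_1/T_p and E_p = S_p/p. The sketch below uses made-up timings, not figures from the article, to show how they are computed.

```python
def speedup(t_serial, t_parallel):
    """S_p = T_1 / T_p: how many times faster the parallel run is."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """E_p = S_p / p: speedup per processor (1.0 would be ideal scaling)."""
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical timings: 120 s serially, 20 s on 8 processors -> S_8 = 6.0, E_8 = 0.75
print(speedup(120.0, 20.0), efficiency(120.0, 20.0, 8))
```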
Modeling Network Latency and Parallel Processing in Distributed Database Design
DECISION SCIENCES, Issue 4 2003
Jesper M. Johansson
ABSTRACT The design of responsive distributed database systems is a key concern for information systems managers. In high-bandwidth networks, latency and local processing are the most significant factors in query and update response time. Parallel processing can be used to minimize their effects, particularly if it is considered at design time. It is the judicious replication and placement of data within a network that enable parallelism to be effectively used. However, latency and parallel processing have largely been ignored in previous distributed database design approaches. We present a comprehensive approach to distributed database design that develops efficient combinations of data allocation and query processing strategies that take full advantage of parallelism. We use a genetic algorithm to enable the simultaneous optimization of data allocation and query processing strategies. We demonstrate that ignoring the effects of latency and parallelism at design time can result in the selection of unresponsive distributed database designs. [source]
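As a rough illustration of how a genetic algorithm can search data-allocation choices of the kind the abstract describes: the chromosome encoding, cost model and GA parameters below are assumptions made for the sketch, not the authors' formulation, and the real design problem also co-optimizes the query-processing strategy.

```python
import random

def random_allocation(n_fragments, n_sites):
    """Toy chromosome: gene i is the site that stores data fragment i."""
    return [random.randrange(n_sites) for _ in range(n_fragments)]

def fitness(allocation, cost_of):
    # cost_of(allocation) stands in for an estimate of query/update response
    # time under this placement; a lower cost means a higher fitness.
    return 1.0 / (1.0 + cost_of(allocation))

def evolve(cost_of, n_fragments, n_sites, pop_size=40, generations=200, p_mut=0.05):
    pop = [random_allocation(n_fragments, n_sites) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: fitness(a, cost_of), reverse=True)
        parents = pop[: pop_size // 2]                   # keep the better half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_fragments)       # one-point crossover
            children.append([random.randrange(n_sites) if random.random() < p_mut else g
                             for g in a[:cut] + b[cut:]])  # plus mutation
        pop = parents + children
    return max(pop, key=lambda a: fitness(a, cost_of))

# e.g. evolve(cost_of=lambda alloc: float(len(set(alloc))), n_fragments=10, n_sites=4)
```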
Parallel Processing: Design/Practice
ARCHITECTURAL DESIGN, Issue 5 2006
David Erdman
Abstract In the late 1990s servo emerged as a young design collaborative embracing new forms of distributed practice as enabled by the advent of telecommunications technologies. In this section, David Erdman, Marcelyn Gow, Ulrika Karlsson and Chris Perry write about how these organisational principles are at work not only in the context of their practice, but in the design work itself, which stretches across a variety of design disciplines to incorporate areas of expertise particular to information and interaction design, as well as a number of manufacturing and fabrication technologies. Many of servo's projects have focused on small-scale interior infrastructures, typically in the form of gallery installations, furniture systems and exhibition designs. This particular scale has allowed the group to focus on the development of full-scale prototypes, exploring a wide range of potential innovations at the point of integration between various technological and material systems. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Parallel processing of remotely sensed hyperspectral imagery: full-pixel versus mixed-pixel classification
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2008
Antonio J. Plaza
Abstract The rapid development of space and computer technologies allows for the possibility to store huge amounts of remotely sensed image data, collected using airborne and satellite instruments. In particular, NASA is continuously gathering high-dimensional image data with Earth observing hyperspectral sensors such as the Jet Propulsion Laboratory's airborne visible/infrared imaging spectrometer (AVIRIS), which measures reflected radiation in hundreds of narrow spectral bands at different wavelength channels for the same area on the surface of the Earth. The development of fast techniques for transforming massive amounts of hyperspectral data into scientific understanding is critical for space-based Earth science and planetary exploration. Despite the growing interest in hyperspectral imaging research, only a few efforts have been devoted to the design of parallel implementations in the literature, and detailed comparisons of standardized parallel hyperspectral algorithms are currently unavailable. This paper compares several existing and new parallel processing techniques for pure and mixed-pixel classification in hyperspectral imagery. The distinction of pure versus mixed-pixel analysis is linked to the considered application domain, and results from the very rich spectral information available from hyperspectral instruments. In some cases, such information allows image analysts to overcome the constraints imposed by limited spatial resolution. In most cases, however, the spectral bands collected by hyperspectral instruments have high statistical correlation, and efficient parallel techniques are required to reduce the dimensionality of the data while retaining the spectral information that allows for the separation of the classes. In order to address this issue, this paper also develops a new parallel feature extraction algorithm that integrates the spatial and spectral information. The proposed technique is evaluated (from the viewpoint of both classification accuracy and parallel performance) and compared with other parallel techniques for dimensionality reduction and classification in the context of three representative application case studies: urban characterization, land-cover classification in agriculture, and mapping of geological features, using AVIRIS data sets with detailed ground-truth. Parallel performance is assessed using Thunderhead, a massively parallel Beowulf cluster at NASA's Goddard Space Flight Center. The detailed cross-validation of parallel algorithms conducted in this work may specifically help image analysts in the selection of parallel algorithms for specific applications. Copyright © 2008 John Wiley & Sons, Ltd. [source]
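A minimal sketch of the data-parallel, full-pixel side of such processing: blocks of pixel spectra are classified against class signatures with a simple minimum-distance rule using Python's multiprocessing. The signature matrix, block partitioning and classifier are illustrative assumptions, not the parallel feature-extraction algorithm developed in the paper.

```python
import numpy as np
from multiprocessing import Pool

def classify_block(args):
    """Assign each pixel spectrum in a block to the nearest class signature."""
    block, signatures = args                       # block: (n_pixels, n_bands)
    d = np.linalg.norm(block[:, None, :] - signatures[None, :, :], axis=2)
    return np.argmin(d, axis=1)                    # class index per pixel

def classify_image(pixels, signatures, n_workers=4):
    """Split the pixel list into blocks and classify the blocks in parallel."""
    blocks = np.array_split(pixels, n_workers)
    with Pool(n_workers) as pool:
        labels = pool.map(classify_block, [(b, signatures) for b in blocks])
    return np.concatenate(labels)

if __name__ == "__main__":
    # Hypothetical data: 10,000 pixels with 200 bands and 5 candidate classes
    rng = np.random.default_rng(0)
    img = rng.random((10_000, 200))
    sigs = rng.random((5, 200))
    print(classify_image(img, sigs)[:10])
```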
An overlapping task assignment scheme for hierarchical coarse-grain task parallel processing
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11 2006
Akimasa Yoshida
Abstract This paper proposes an overlapping task assignment scheme for the hierarchical coarse-grain task parallel processing on multiprocessor systems. In coarse-grain task parallel processing, the compiler extracts parallelism among coarse-grain tasks automatically and the coarse-grain tasks are assigned to processor clusters at runtime. However, several programs may decrease the processor-cluster utilization factor owing to lack of parallelism inside each coarse-grain task. Therefore, in order to improve the processor-cluster utilization factor, this paper proposes the execution scheme with overlapping task assignment, whose dynamic scheduler can assign several coarse-grain tasks to a processor cluster simultaneously. Also, the performance evaluations by simulations and executions on SMP showed that the proposed scheme could reduce the execution time remarkably. Copyright © 2006 John Wiley & Sons, Ltd. [source]
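A toy sketch of the scheduling idea described above: when one coarse-grain task cannot use all processors of a cluster, the scheduler lets further tasks share that cluster. The task widths, greedy policy and data structures are assumptions made for illustration, not the compiler and runtime scheduler of the paper.

```python
def assign_overlapping(tasks, clusters):
    """Greedy illustration of overlapping assignment: several coarse-grain tasks
    may share one processor cluster when a single task cannot fill it.

    tasks    -- list of (task_name, usable_processors)
    clusters -- list of (cluster_name, processors_in_cluster)
    Returns a dict {cluster_name: [task_name, ...]}.
    """
    free = {name: size for name, size in clusters}
    assignment = {name: [] for name, _ in clusters}
    for task, width in sorted(tasks, key=lambda t: -t[1]):   # widest tasks first
        best = max(free, key=free.get)                       # most idle cluster
        if free[best] == 0:
            break                                            # all clusters saturated
        assignment[best].append(task)                        # tasks may overlap here
        free[best] = max(0, free[best] - width)
    return assignment

# e.g. assign_overlapping([("MT1", 2), ("MT2", 1), ("MT3", 1)],
#                         [("PC0", 4), ("PC1", 2)])
# -> MT1 and MT2 end up sharing PC0 while MT3 runs on PC1.
```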
Sequence alignment on the Cray MTA-2
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2004
Shahid H. Bokhari
Abstract Several variants of standard algorithms for DNA sequence alignment have been implemented on the Cray Multithreaded Architecture-2 (MTA-2). We describe the architecture of the MTA-2 and discuss how its hardware and software enable efficient implementation of parallel algorithms with little or no regard for issues of partitioning, mapping or scheduling. We describe how we ported variants of the naive algorithm for exact alignment and the dynamic programming algorithm for approximate alignment to the MTA-2 and provide detailed performance measurements. It is shown that, for the dynamic programming algorithm, the use of the MTA's 'Full/Empty' synchronization bits leads to almost perfect speedup for large problems on one to eight processors. These results illustrate the versatility of the MTA's architecture and demonstrate its potential for providing a high-productivity platform for parallel processing. Copyright © 2004 John Wiley & Sons, Ltd. [source]
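The dynamic programming alignment mentioned above fills a score matrix in which each cell depends only on its left, upper and upper-left neighbours, so every cell on one anti-diagonal can be computed independently; this is the dependence pattern that the MTA's 'Full/Empty' bits synchronize at fine grain. Below is a small serial sketch of that anti-diagonal (wavefront) order with simple unit scores; it is an illustration under those assumptions, not the ported MTA-2 code.

```python
def align_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch style scoring, filled anti-diagonal by anti-diagonal."""
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        H[i][0] = i * gap
    for j in range(1, m + 1):
        H[0][j] = j * gap
    # Cells on the same anti-diagonal (i + j = d) are mutually independent,
    # so a multithreaded machine can compute each wavefront in parallel.
    for d in range(2, n + m + 1):
        for i in range(max(1, d - m), min(n, d - 1) + 1):
            j = d - i
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(H[i - 1][j - 1] + s,   # substitution
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
    return H[n][m]

# e.g. align_score("GATTACA", "GCATGCU") returns the global alignment score
```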
High-frequency gamma oscillations coexist with low-frequency gamma oscillations in the rat visual cortex in vitro
EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 8 2010
Olaleke O. Oke
Abstract Synchronization of neuronal activity in the visual cortex at low (30–70 Hz) and high gamma band frequencies (>70 Hz) has been associated with distinct visual processes, but mechanisms underlying high-frequency gamma oscillations remain unknown. In rat visual cortex slices, kainate and carbachol induce high-frequency gamma oscillations (fast-γ; peak frequency ~80 Hz at 37°C) that can coexist with low-frequency gamma oscillations (slow-γ; peak frequency ~50 Hz at 37°C) in the same column. Current-source density analysis showed that fast-γ was associated with rhythmic current sink-source sequences in layer III and slow-γ with rhythmic current sink-source sequences in layer V. Fast-γ and slow-γ were not phase-locked. Slow-γ power fluctuations were unrelated to fast-γ power fluctuations, but were modulated by the phase of theta (3–8 Hz) oscillations generated in the deep layers. Fast-γ was spatially less coherent than slow-γ. Fast-γ and slow-γ were dependent on γ-aminobutyric acid (GABA)A receptors, α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors and gap-junctions; their frequencies were reduced by thiopental and were weakly dependent on cycle amplitude. Fast-γ and slow-γ power were differentially modulated by thiopental and adenosine A1 receptor blockade, and their frequencies were differentially modulated by N-methyl-D-aspartate (NMDA) receptors, GluK1 subunit-containing receptors and persistent sodium currents. Our data indicate that fast-γ and slow-γ both depend on and are paced by recurrent inhibition, but have distinct pharmacological modulation profiles. The independent co-existence of fast-γ and slow-γ allows parallel processing of distinct aspects of vision and visual perception. The visual cortex slice provides a novel in vitro model to study cortical high-frequency gamma oscillations. [source]

Energy Group optimization for forward and inverse problems in nuclear engineering: application to downwell-logging problems
GEOPHYSICAL PROSPECTING, Issue 2 2006
Elsa Aristodemou
ABSTRACT Simulating radiation transport of neutral particles (neutrons and γ-ray photons) within subsurface formations has been an area of research in the nuclear well-logging community since the 1960s, with many researchers exploiting existing computational tools already available within the nuclear reactor community. Deterministic codes became a popular tool, with the radiation transport equation being solved using a discretization of the phase-space of the problem (energy, angle, space and time). The energy discretization in such codes is based on the multigroup approximation, or equivalently the discrete finite-difference energy approximation. One of the uncertainties of simulating radiation transport problems has therefore become the multigroup energy structure. The nuclear reactor community has tackled the problem by optimizing existing nuclear cross-sectional libraries using a variety of group-collapsing codes, whilst the nuclear well-logging community has relied, until now, on libraries used in the nuclear reactor community. However, although the utilization of such libraries has been extremely useful in the past, it has also become clear that a larger number of energy groups were available than was necessary for the well-logging problems. It was obvious, therefore, that a multigroup energy structure specific to the needs of the nuclear well-logging community needed to be established. This would have the benefit of reducing computational time (the ultimate aim of this work) for both the stochastic and deterministic calculations, since computational time increases with the number of energy groups. We therefore present in this study two methodologies that enable the optimization of any multigroup neutron–γ energy structure. Although we test our theoretical approaches on nuclear well-logging synthetic data, the methodologies can be applied to other radiation transport problems that use the multigroup energy approximation. The first approach considers the effect of collapsing the neutron groups by solving the forward transport problem directly using the deterministic code EVENT, and obtaining neutron and γ-ray fluxes deterministically for the different group-collapsing options. The best collapsing option is chosen as the one which minimizes the effect on the γ-ray spectrum. During this methodology, parallel processing is implemented to reduce computational times. The second approach uses the uncollapsed output from neural network simulations in order to estimate the new, collapsed fluxes for the different collapsing cases. Subsequently, an inversion technique is used which calculates the properties of the subsurface, based on the collapsed fluxes. The best collapsing option is chosen as the one that predicts the subsurface properties with a minimal error. The fundamental difference between the two methodologies relates to their effect on the generated γ-rays. The first methodology takes the generation of γ-rays fully into account by solving the transport equation directly. The second methodology assumes that the reduction of the neutron groups has no effect on the γ-ray fluxes. It does, however, utilize an inversion scheme to predict the subsurface properties reliably, and it looks at the effect of collapsing the neutron groups on these predictions. Although the second procedure is favoured because of (a) the speed with which a solution can be obtained and (b) the application of an inversion scheme, its results need to be validated against a physically more stringent methodology. A comparison of the two methodologies is therefore given. [source]
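As a small numerical illustration of what collapsing a multigroup structure involves: fine-group fluxes are summed over each coarse group and group constants are flux-weighted. The fluxes, cross-sections and group edges below are made up, not output from the EVENT runs described above.

```python
import numpy as np

def collapse_groups(flux, sigma, edges):
    """Collapse fine-group fluxes and cross-sections into coarse groups.

    flux, sigma -- per-fine-group flux and cross-section (same length)
    edges       -- fine-group index where each coarse group starts, e.g. [0, 3]
    """
    flux, sigma = np.asarray(flux, float), np.asarray(sigma, float)
    bounds = list(edges) + [len(flux)]
    coarse_flux, coarse_sigma = [], []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        phi = flux[lo:hi].sum()                                   # total flux in coarse group
        coarse_flux.append(phi)
        coarse_sigma.append((sigma[lo:hi] * flux[lo:hi]).sum() / phi)  # flux-weighted constant
    return np.array(coarse_flux), np.array(coarse_sigma)

# Hypothetical 6-group data collapsed to 2 coarse groups:
phi_c, sig_c = collapse_groups([4, 3, 2, 1, 1, 1], [10, 8, 6, 5, 4, 3], edges=[0, 3])
print(phi_c, sig_c)   # coarse fluxes [9. 3.] and their flux-weighted cross-sections
```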
A visual incompressible magneto-hydrodynamics solver with radiation, mass, and heat transfer
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 10 2009
Necdet Aslan
Abstract A visual two-dimensional (2D) nonlinear magneto-hydrodynamics (MHD) code that is able to solve steady state or transient charged or neutral convection problems under radiation, mass, and heat transfer effects is presented. The flows considered are incompressible, and the divergence conditions on the velocity and magnetic fields are handled by similar relaxation schemes in the form of pseudo-iterations between the real time levels. The numerical method utilizes a matrix distribution scheme that runs on structured or unstructured triangular meshes. The time-dependent algorithm developed here utilizes a semi-implicit dual time stepping technique with a multistage Runge-Kutta (RK) algorithm. It is possible for the user to choose different normalizations (natural, forced, Boussinesq, Prandtl, double-diffusive and radiation convection) automatically. The code is visual and runs interactively with the user. The graphics algorithms work multithreaded and allow the user to follow certain flow features (color graphs, vector graphs, one-dimensional profiles) during runs; see (Comput. Fluids 2007; 36:961–973) for details. With the code presented here, the nonlinear steady or time-dependent evolution of heated and stratified neutral and charged liquids, convection of mixtures of neutral and charged gases, and double-diffusive and salinity natural convection flows with internal heat generation/absorption and radiative heat transfer can be investigated. In addition, the numerical method (combining concentration, radiation, heat transfer, and MHD effects) takes advantage of local time stepping and employs a simplified residual Jacobian matrix to increase the pseudo-convergence rate. This code is currently being improved to simulate three-dimensional problems with parallel processing. Copyright © 2009 John Wiley & Sons, Ltd. [source]
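A bare-bones sketch of the dual time-stepping idea mentioned above: within each physical time step, pseudo-time (inner) iterations drive the residual of the implicit update to zero. The residual function, step sizes and tolerances are placeholders, not the matrix distribution scheme or multistage RK stages of the paper.

```python
import numpy as np

def dual_time_step(u, residual, dt, dtau=0.1, tol=1e-8, max_inner=500):
    """Advance u by one physical step dt using pseudo-time (inner) iterations.

    residual(v, u_old, dt) should return R(v) = (v - u_old)/dt + spatial_terms(v);
    the physical step is accepted once R has been relaxed to (near) zero.
    """
    v = u.copy()
    for _ in range(max_inner):
        r = residual(v, u, dt)
        if np.linalg.norm(r) < tol:
            break
        v = v - dtau * r            # explicit pseudo-time update (RK stages could go here)
    return v

# Toy usage: relax du/dt = -u with a backward-Euler-like residual (hypothetical problem)
u = np.array([1.0])
res = lambda v, u_old, dt: (v - u_old) / dt + v
for _ in range(5):
    u = dual_time_step(u, res, dt=0.1)
print(u)   # decays roughly like exp(-t)
```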
ACE4k: An analog I/O 64×64 visual microprocessor chip with 7-bit analog accuracy
INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 2-3 2002
G. Liñán
Abstract This paper describes a full-custom mixed-signal chip which embeds distributed optical signal acquisition, digitally-programmable analog parallel processing, and a distributed image memory cache on a common silicon substrate. This chip, designed in a 0.5 µm standard CMOS technology, contains around 1.000.000 transistors, many of which operate in analog mode; it is hence one of the most complex mixed-signal chips reported to date. Chip functional features are: local interactions, spatial-invariant array architecture; programmable local interactions among cells; randomly-selectable memory of instructions (elementary instructions are defined by specific values of the cell local interactions); random storage/retrieval of intermediate images; capability to complete algorithmic image processing tasks controlled by the user-selected stored instructions and interacting with the cache memory, etc. Thus, as illustrated in this paper, the chip is capable of completing complex spatio-temporal image processing tasks within a short computation time (<300 ns for linear convolutions) and using a low power budget (<1.2 W for the complete chip). The internal circuitry of the chip has been designed to operate in a robust manner with >7-bit equivalent accuracy in the internal analog operations, which has been confirmed by experimental measurements. Such 7-bit accuracy is enough for most image processing applications. ACE4k has been demonstrated capable of implementing up to 30 templates, either directly or through template decomposition. This means 100% of the 3×3 linear templates reported in Roska et al. 1998 [1]. Copyright © 2002 John Wiley & Sons, Ltd. [source]

A batch-type time-true ATM-network simulator – design for parallel processing
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 8 2002
Michael Logothetis
Abstract This paper presents a new type of network simulator for simulating the call-level operations of telecom networks and especially ATM networks. The simulator is a pure time-true type as opposed to a call-by-call type simulator. It is also characterized as a batch-type simulator. The entire simulation duration is divided into short time intervals of equal duration, t. During t, a batch processing of call origination or termination events is executed and the time-points of these events are sorted. The number of sorting executions is drastically reduced compared to a call-by-call simulator, resulting in considerable timesaving. The proposed data structures of the simulator can be implemented by a general-purpose programming language and are well fitted to parallel processing techniques for implementation on parallel computers, for further savings of execution time. We have first implemented the simulator on a sequential computer and then applied parallelization techniques to achieve its implementation on a parallel computer. In order to simplify the parallelization procedure, we dissociate the core simulation from the built-in call-level functions (e.g. bandwidth control or dynamic routing) of the network. The key point for a parallel implementation is to organize data by virtual paths (VPs) and distribute them among processors, which all execute the same set of instructions on this data. The performance of the proposed batch-type, time-true, ATM-network simulator is compared with that of a call-by-call simulator to reveal its superiority in terms of sequential execution time (when both simulators run on conventional computers). Finally, a measure of the accuracy of the simulation results is given. Copyright © 2002 John Wiley & Sons, Ltd. [source]
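A small sketch of the batch-type idea in the abstract above: instead of re-sorting a global event list after every call, events are generated per interval of length t, sorted once per batch, and then processed in time order. The Poisson arrivals and exponential holding times are a hypothetical stand-in for the ATM call-level model, and the sketch is sequential, without the VP-based data distribution.

```python
import random

def simulate(total_time, t=1.0, arrival_rate=5.0, mean_holding=2.0, seed=1):
    """Batch-type, time-true call-level simulation sketch: one sort per interval t."""
    random.seed(seed)
    pending_terminations = []          # termination time-points awaiting processing
    active = completed = 0
    start = 0.0
    while start < total_time:
        end = start + t
        # events of this batch: due terminations plus new originations
        batch = [(tp, "terminate") for tp in pending_terminations if tp < end]
        pending_terminations = [tp for tp in pending_terminations if tp >= end]
        tp = start + random.expovariate(arrival_rate)
        while tp < end:
            batch.append((tp, "originate"))
            # holding time drawn now; its termination is handled in a later batch
            pending_terminations.append(tp + random.expovariate(1.0 / mean_holding))
            tp += random.expovariate(arrival_rate)
        batch.sort()                   # the single sort executed for this interval
        for _, kind in batch:
            if kind == "originate":
                active += 1
            else:
                active -= 1
                completed += 1
        start = end
    return completed, active

print(simulate(total_time=100.0))      # (completed calls, calls still active)
```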
Clustering with artificial neural networks and traditional techniques
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 4 2003
G. Tambouratzis
In this article, two clustering techniques based on neural networks are introduced. The two neural network models are the Harmony theory network (HTN) and the self-organizing logic neural network (SOLNN), both of which are characterized by parallel processing, a distributed architecture, and a large number of nodes. After describing their clustering characteristics and potential, a comparison to classical statistical techniques is performed. This comparison allows the creation of a correspondence between each neural network clustering technique and particular metrics as used by the corresponding statistical methods, which reflect the affinity of the clustered patterns. In particular, the HTN is found to perform the clustering task with an accuracy similar to the best statistical methods, while it is further capable of proposing an optimal number of groups into which the patterns may be clustered. On the other hand, the SOLNN combines a high clustering accuracy with the ability to cluster higher-dimensional patterns without a considerable increase in the processing time. © 2003 Wiley Periodicals, Inc. [source]

A proactive management algorithm for self-healing mobile ad hoc networks
INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 3 2008
Adel F. Iskander
The ability to proactively manage mobile ad hoc networks (MANETs) is critical for supporting complex services such as quality of service, security and access control in these networks. This paper focuses on the problem of managing highly dynamic and resource-constrained MANET environments through the proposal of a novel proactive management algorithm (PMA) for self-healing MANETs. PMA is based on an effective integration of autonomous, predictive and adaptive distributed management strategies. Proactive management is achieved through the distributed analysis of the current performance of the mobile nodes utilizing an optimistic discrete event simulation method, which is used to predict the mobile nodes' future status, and the execution of a proactive fault-tolerant management scheme. PMA takes advantage of the distributed parallel processing, flexibility and intelligence of active packets to minimize the management overhead, while adapting to the highly dynamic and resource-constrained nature of MANETs. The performance of the proposed architecture is validated through analytical performance analysis and comparative simulation with the Active Virtual Network Management Protocol. The simulation results demonstrate that PMA not only significantly reduces management control overhead, but also improves both the performance and the stability of MANETs. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Multilevel fast multipole algorithm enhanced by GPU parallel technique for electromagnetic scattering problems
MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 3 2010
Kan Xu
Abstract Along with the development of graphics processing units (GPUs) in floating-point operations and programmability, the GPU has increasingly become an attractive alternative to the central processing unit (CPU) for some compute-intensive and parallel tasks. In this article, the multilevel fast multipole algorithm (MLFMA) combined with a graphics hardware acceleration technique is applied to analyze electromagnetic scattering from complex targets. Although it is possible to perform scattering simulations of electrically large targets on a personal computer (PC) through the MLFMA, a large CPU time is required for the execution of the aggregation, translation, and deaggregation operations. Thus the GPU computing technique is used for the parallel processing of the MLFMA, and a significant speedup of the matrix-vector product (MVP) can be observed. Following the programming model of the compute unified device architecture (CUDA), several kernel functions characterized by the single instruction multiple data (SIMD) mode are abstracted from components of the MLFMA and executed by the multiple processors of the GPU. Numerical results demonstrate the efficiency of the GPU acceleration technique for the MLFMA. © 2010 Wiley Periodicals, Inc. Microwave Opt Technol Lett 52: 502–507, 2010; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.24963 [source]

The role of bioinformatics in two-dimensional gel electrophoresis
PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 8 2003
Andrew W. Dowsey
Abstract Over the last two decades, two-dimensional gel electrophoresis (2-DE) has established itself as the de facto approach to separating proteins from cell and tissue samples. Due to the sheer volume of data and its experimental geometric and expression uncertainties, quantitative analysis of these data with image processing and modelling has become an actively pursued research topic. The results of these analyses include accurate protein quantification, isoelectric point and relative molecular mass estimation, and the detection of differential expression between samples run on different gels. Systematic errors such as current leakage and regional expression inhomogeneities are corrected for, followed by each protein spot in the gel being segmented and modelled for quantification. To assess differential expression of protein spots in different samples run on a series of two-dimensional gels, a number of image registration techniques for correcting geometric distortion have been proposed. This paper provides a comprehensive review of the computational techniques used in the analysis of 2-DE gels, together with a discussion of current and future trends in large-scale analysis. We examine the pitfalls of existing techniques and highlight some of the key areas that need to be developed in the coming years, especially those related to statistical approaches based on multiple gel runs and image mining techniques through the use of parallel processing based on cluster computing and grid technology. [source]

A parallel and distributed-processing model of joint attention, social cognition and autism
AUTISM RESEARCH, Issue 1 2009
Peter Mundy
Abstract The impaired development of joint attention is a cardinal feature of autism. Therefore, understanding the nature of joint attention is central to research on this disorder. Joint attention may be best defined in terms of an information-processing system that begins to develop by 4–6 months of age. This system integrates the parallel processing of internal information about one's own visual attention with external information about the visual attention of other people. This type of joint encoding of information about self and other attention requires the activation of a distributed anterior and posterior cortical attention network. Genetic regulation, in conjunction with self-organizing behavioral activity, guides the development of functional connectivity in this network. With practice in infancy, the joint processing of self–other attention becomes automatically engaged as an executive function. It can be argued that this executive joint attention is fundamental to human learning as well as the development of symbolic thought, social cognition and social competence throughout the life span. One advantage of this parallel and distributed-processing model of joint attention is that it directly connects theory on social pathology to a range of phenomena in autism associated with neural connectivity, constructivist and connectionist models of cognitive development, early intervention, activity-dependent gene expression and atypical ocular motor control. [source]