
You are searching for the phrase "parallel processing" by criterion: Subject


Title:
Parallel computing in a network of workstations
Authors:
Ogrodowczyk, R.
Murawski, K.
Links:
https://bibliotekanauki.pl/articles/1954092.pdf
Publication date:
2004
Publisher:
Politechnika Gdańska
Subjects:
parallel computing
clusters
parallel-processing systems
Description:
In this paper we describe several architectures and software solutions for parallel-processing computers. We have tested a cluster constructed with the use of MPI. All tests have been performed for one- and two-dimensional magnetohydrodynamic plasma. We have concluded from these tests that a simple problem should be run sequentially, as its execution time does not decrease significantly with the number of processors used. At the same time, the execution time of a complex problem decreases significantly with the number of processors. In the case of two-dimensional plasma the speedup factor reached 3.7 with the use of 10 processors.
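A minimal, illustrative mpi4py sketch (not from the paper) of how such a speedup measurement can be organized: each rank advances its own slice of a 1D domain, exchanges ghost cells with its neighbours, and the elapsed wall time is compared against a single-process run. The grid size, step count and `advance` kernel below are placeholders.

```python
# Hypothetical speedup test: split a 1D grid across MPI ranks and time the run.
# Run with, for example:  mpiexec -n 10 python speedup_test.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N, STEPS = 1_000_000, 200          # placeholder problem size
local = np.zeros(N // size + 2)    # local slice plus two ghost cells

def advance(u):
    """Placeholder explicit update (simple diffusion-like stencil)."""
    u[1:-1] = 0.5 * u[1:-1] + 0.25 * (u[:-2] + u[2:])

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(STEPS):
    # exchange ghost cells with the neighbouring ranks (periodic wrap for simplicity)
    left, right = (rank - 1) % size, (rank + 1) % size
    comm.Sendrecv(local[1:2], dest=left, recvbuf=local[-1:], source=right)
    comm.Sendrecv(local[-2:-1], dest=right, recvbuf=local[:1], source=left)
    advance(local)
elapsed = MPI.Wtime() - t0

if rank == 0:
    # compare against the 1-process time to obtain the speedup factor
    print(f"{size} processes: {elapsed:.2f} s")
```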
Source:
TASK Quarterly. Scientific Bulletin of Academic Computer Centre in Gdansk; 2004, 8, 3; 327-332
1428-6394
Appears in:
TASK Quarterly. Scientific Bulletin of Academic Computer Centre in Gdansk
Content provider:
Biblioteka Nauki
Article
Title:
Parallel mesh generator for biomechanical purpose
Authors:
Hausa, H.
Nowak, M.
Links:
https://bibliotekanauki.pl/articles/279979.pdf
Publication date:
2014
Publisher:
Polskie Towarzystwo Mechaniki Teoretycznej i Stosowanej
Subjects:
parallel processing
finite element mesh generator
biomechanics
Description:
The analysis of biological structures with numerical methods based on an engineering approach (i.e. Computational Solid Mechanics) is becoming more and more popular nowadays. The examination of complex, well-reproduced biological structures (e.g. bone) is impossible to perform on a single workstation. A Finite Element Method (FEM) mesh of the order of 10^6 elements is required to model a small piece of trabecular bone. Homogenization techniques could be used to solve this problem, but these methods require several assumptions and simplifications. Hence, effective analysis of a biological structure in a parallel environment is desirable. Software for structure simulation on cluster architectures is available; however, an FEM mesh generator is still unavailable in that environment. The mesh generator for biological applications – Cosmoprojector – developed at the Division of Virtual Engineering, Poznan University of Technology, has been adapted to the parallel environment. The preliminary results of complex structure generation confirm the correctness of the proposed method. In this paper, the algorithm of computational mesh generation in a parallel environment is presented. The proposed system has been tested on a biological structure.
Source:
Journal of Theoretical and Applied Mechanics; 2014, 52, 1; 71-80
1429-2955
Appears in:
Journal of Theoretical and Applied Mechanics
Content provider:
Biblioteka Nauki
Article
Title:
The Solution of SAT Problems Using Ternary Vectors and Parallel Processing
Authors:
Posthoff, C.
Steinbach, B.
Links:
https://bibliotekanauki.pl/articles/226241.pdf
Publication date:
2011
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
SAT solver
ternary vector
parallel processing
XBOOLE
Description:
This paper shows a new approach to the solution of SAT problems. It is based on the isomorphism between the Boolean algebras of finite sets and the Boolean algebras of logic functions depending on a finite number of binary variables. Ternary vectors are the main data structure representing sets of Boolean vectors. The respective set operations (mainly the complement and the intersection) can be executed in a bit-parallel way (64 bits at present), but additionally also on different processors working in parallel. Even a hierarchy of processors – a small set of processor cores of a single CPU and the huge number of cores of a GPU – has been taken into consideration. There is no need for any search algorithms. The approach always finds all solutions of the problem without considering special cases (such as no solution, one solution, all solutions). It also allows problem-relevant knowledge to be included in the problem-solving process at an early point in time. Very often it is possible to use ternary vectors directly for modeling a problem. Some examples are used to illustrate the efficiency of this approach (Sudoku, queens problems on the chessboard, node bases in graphs, graph-coloring problems, Hamiltonian and Eulerian paths, etc.).
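As a rough illustration (not the authors' XBOOLE implementation), a ternary vector over n binary variables can be encoded with two machine words – a care mask and a value mask – so that the intersection of two such vectors, viewed as sets of Boolean vectors, reduces to a few bitwise operations; Python integers stand in for the 64-bit words here.

```python
# Illustrative ternary-vector encoding: each vector is a (care, value) pair of bit masks.
# Bit i of `care` = 1 means variable i is fixed; `value` holds its fixed value.
# A ternary vector therefore represents the set of all Boolean vectors that
# agree with `value` on the positions marked in `care`.

def tv_intersection(a, b):
    """Intersect two ternary vectors a = (care_a, val_a), b = (care_b, val_b).

    Returns the ternary vector describing the common Boolean vectors,
    or None if the two sets are disjoint (conflicting fixed values).
    """
    care_a, val_a = a
    care_b, val_b = b
    common = care_a & care_b
    if (val_a ^ val_b) & common:              # conflict on a jointly fixed variable
        return None
    care = care_a | care_b
    value = (val_a & care_a) | (val_b & care_b)
    return care, value


# Example with 4 variables x3 x2 x1 x0:
#   a fixes x0 = 1            -> (0b0001, 0b0001)
#   b fixes x0 = 1 and x2 = 0 -> (0b0101, 0b0001)
a = (0b0001, 0b0001)
b = (0b0101, 0b0001)
print(tv_intersection(a, b))            # (5, 1), i.e. care=0b0101, value=0b0001: x0 = 1, x2 = 0
print(tv_intersection(a, (0b0001, 0)))  # None: x0 = 1 conflicts with x0 = 0
```

The same word-level operations apply unchanged to whole arrays of ternary vectors, which is what makes the bit-parallel and multi-processor variants described above natural.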
Source:
International Journal of Electronics and Telecommunications; 2011, 57, 3; 233-249
2300-1933
Appears in:
International Journal of Electronics and Telecommunications
Content provider:
Biblioteka Nauki
Article
Title:
Parallel Mutant Execution Techniques in Mutation Testing Process for Simulink Models
Authors:
Hanh, L. T. M.
Binh, N. T.
Tung, K. T.
Links:
https://bibliotekanauki.pl/articles/307992.pdf
Publication date:
2017
Publisher:
Instytut Łączności - Państwowy Instytut Badawczy
Subjects:
mutant execution
mutation testing
parallel processing
software testing
Description:
Mutation testing – a fault-based technique for software testing – is a computationally expensive approach. A powerful way to improve the performance of mutation testing without reducing its effectiveness is to employ parallel processing, where mutants and tests are executed in parallel. This approach reduces the total time needed to accomplish the mutation analysis. This paper proposes three strategies for parallel execution of mutants on multicore machines using the Parallel Computing Toolbox (PCT) with the Matlab Distributed Computing Server. It aims to demonstrate that computationally intensive software testing schemes, such as mutation, can be facilitated by using parallel processing. The experiments were carried out on eight different Simulink models. The results demonstrate the efficiency of the proposed approaches in terms of execution time during the testing process.
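The paper works with Matlab PCT, but the underlying scheme – run each mutant against the test suite in a separate worker and collect the kill verdicts – can be sketched language-agnostically. The Python `multiprocessing` version below is only an analogy, with the hypothetical `run_tests_on_mutant` standing in for whatever executes a mutated Simulink model.

```python
# Analogy (not the paper's Matlab/PCT code): distribute mutants over worker
# processes and record, for each mutant, whether the test suite kills it.
from multiprocessing import Pool

MUTANTS = [f"mutant_{i:03d}" for i in range(40)]   # placeholder mutant identifiers

def run_tests_on_mutant(mutant_id):
    """Placeholder: execute the test suite against one mutant.

    In the real setting this would simulate the mutated Simulink model for
    every test case and compare its outputs with the original model.
    """
    killed = hash((mutant_id, "demo")) % 3 != 0    # fake verdict, for the sketch only
    return mutant_id, killed

if __name__ == "__main__":
    with Pool(processes=4) as pool:                # e.g. one worker per core
        results = pool.map(run_tests_on_mutant, MUTANTS)

    killed = sum(1 for _, k in results if k)
    print(f"mutation score: {killed}/{len(MUTANTS)}")
```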
Source:
Journal of Telecommunications and Information Technology; 2017, 4; 90-100
1509-4553
1899-8852
Appears in:
Journal of Telecommunications and Information Technology
Content provider:
Biblioteka Nauki
Article
Title:
Intelligence and parallel versus sequential organization of information processing in analogical reasoning
Authors:
Orzechowski, Jarosław
Nęcka, Edward
Links:
https://bibliotekanauki.pl/articles/419420.pdf
Publication date:
2011-11-01
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
intelligence
analogical reasoning
parallel processing
sequential processing
attention
working memory
Description:
The construct of the organization of information processing (OIP) has been adopted as a possible cognitive mechanism responsible for human intelligent functioning. Participants (N = 77) were asked to solve an analogical reasoning task, a test of divided attention, a working memory capacity test, and Raven's Advanced Progressive Matrices as a standard test of general fluid intelligence. On the basis of the chronometric analysis of their performance in the analogy task, participants were divided into those preferring to use parallel or sequential modes of organization of information processing. It appeared that intelligent people using the parallel mode of processing obtained the best results in the analogical reasoning test. Other subgroups did not differ substantially from one another. It also appeared that intelligent people using the parallel mode of processing performed equally well regardless of their attentional resources and working memory capacity, whereas people using the sequential mode of processing were much more dependent on these basic cognitive limitations. A compensatory mechanism is suggested in order to account for these data: the parallel mode of processing probably helps to compensate for deficient attention or impaired working memory, whereas the sequential mode cannot act in a compensatory way.
Source:
Studia Psychologiczne (Psychological Studies); 2011, 49, 5; 41-53
0081-685X
Appears in:
Studia Psychologiczne (Psychological Studies)
Content provider:
Biblioteka Nauki
Article
Title:
Depth images filtering in distributed streaming
Authors:
Dziubich, T.
Szymański, J.
Brzeski, A. M.
Cychnerski, J.
Korłub, W. M.
Links:
https://bibliotekanauki.pl/articles/259864.pdf
Publication date:
2016
Publisher:
Politechnika Gdańska. Wydział Inżynierii Mechanicznej i Okrętownictwa
Subjects:
point cloud processing
distributed system
parallel processing
depth image filtering
Description:
In this paper, we propose a distributed system for processing point clouds and transferring them over a computer network with respect to effectiveness-related requirements. We discuss a comparison of point cloud filters, focusing on their usage for streaming optimization. For the filtering step of the stream processing pipeline we evaluate four filters: Voxel Grid, Radius Outlier Removal, Statistical Outlier Removal and Pass Through. For each of the filters we perform a series of tests evaluating the impact on the point cloud size and the transmission frequency (analysed for various fps ratios). We present results of the optimization process used for point cloud consolidation in a distributed environment. We describe the processing of the point clouds before and after the transmission. Pre- and post-processing allow the user to send the cloud over the network without delays. The proposed pre-processing compression of the cloud and its post-processing reconstruction are focused on assuring that the end-user application obtains the cloud with a given precision.
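For orientation only, a voxel-grid downsampling step – the kind of filter compared above – can be written in a few lines of NumPy: points are bucketed into cubic cells of edge `leaf_size` and each occupied cell is replaced by the centroid of its points. The leaf size and the random test cloud below are arbitrary, and this is not the paper's implementation.

```python
# Illustrative voxel-grid filter: replace all points falling into the same
# cubic cell by their centroid, reducing the cloud size before streaming.
import numpy as np

def voxel_grid_filter(points, leaf_size=0.05):
    """points: (N, 3) array of XYZ coordinates; returns the downsampled cloud."""
    voxel_idx = np.floor(points / leaf_size).astype(np.int64)       # cell index per point
    _, inverse, counts = np.unique(voxel_idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)                                    # flatten across NumPy versions
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)                            # sum points per occupied cell
    return centroids / counts[:, None]                               # centroid per occupied cell

if __name__ == "__main__":
    cloud = np.random.rand(100_000, 3)                               # synthetic 1 m^3 cloud
    reduced = voxel_grid_filter(cloud, leaf_size=0.05)
    print(f"{cloud.shape[0]} -> {reduced.shape[0]} points")
```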
Source:
Polish Maritime Research; 2016, 2; 91-98
1233-2585
Appears in:
Polish Maritime Research
Content provider:
Biblioteka Nauki
Article
Title:
Novel approach for big data classification based on hybrid parallel dimensionality reduction using spark cluster
Authors:
Ali, Ahmed Hussein
Abdullah, Mahmood Zaki
Links:
https://bibliotekanauki.pl/articles/305766.pdf
Publication date:
2019
Publisher:
Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie. Wydawnictwo AGH
Subjects:
big data
dimensionality reduction
parallel processing
Spark
PCA
LDA
Description:
The big data concept has elicited studies on how to accurately and efficiently extract valuable information from such huge datasets. The major problem during big data mining is data dimensionality, due to the large number of dimensions in such datasets. The main consequence of high data dimensionality is that it affects the accuracy of machine learning (ML) classifiers; it also results in time wastage due to the presence of several redundant features in the dataset. This problem can possibly be solved using a fast feature reduction method. Hence, this study presents fast HP-PL, a new hybrid parallel feature reduction framework that utilizes Spark to facilitate feature reduction on shared/distributed-memory clusters. The evaluation of the proposed HP-PL on the KDD99 dataset showed the algorithm to be significantly faster than conventional feature reduction techniques. The proposed technique required just over one minute to select 4 dataset features from over 79 features and 3,000,000 samples on a 3-node cluster (a total of 21 cores), whereas the comparative algorithm required more than 2 hours to achieve the same feat. In the proposed system, the Hadoop distributed file system (HDFS) was used to achieve distributed storage, while Apache Spark was used as the computing engine. The model development was based on a parallel model with full consideration of the high performance and throughput of distributed computing. Conclusively, the proposed HP-PL method can achieve good accuracy with less memory and time compared to conventional methods of feature reduction. The tool can be publicly accessed at https://github.com/ahmed/Fast-HP-PL.
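As a rough sketch of the general pattern only (parallel dimensionality reduction on Spark, not the authors' HP-PL code), the snippet below assembles numeric columns into feature vectors and fits a distributed PCA with pyspark.ml; the file path, column names and value of k are made up for the example.

```python
# Generic Spark-based dimensionality reduction (illustration only, not HP-PL):
# assemble raw columns into a feature vector and project it onto k principal
# components, with the work distributed over the cluster by Spark.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler, PCA

spark = SparkSession.builder.appName("parallel-dim-reduction").getOrCreate()

# Placeholder input: a CSV with numeric columns f0 ... f40 (e.g. KDD99-style features).
df = spark.read.csv("hdfs:///data/kdd99_numeric.csv", header=True, inferSchema=True)
feature_cols = [c for c in df.columns if c.startswith("f")]

assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
assembled = assembler.transform(df)

pca = PCA(k=4, inputCol="features", outputCol="reduced")   # keep 4 components
model = pca.fit(assembled)                                 # distributed fit
reduced = model.transform(assembled).select("reduced")

print(model.explainedVariance)                             # variance captured per component
reduced.show(5, truncate=False)
spark.stop()
```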
Source:
Computer Science; 2019, 20 (4); 411-429
1508-2806
2300-7036
Appears in:
Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
Optimization of Short-Lag Spatial Coherence Imaging Method
Authors:
Domaradzki, Jakub
Lewandowski, Marcin
Żołek, Norbert
Links:
https://bibliotekanauki.pl/articles/176815.pdf
Publication date:
2019
Publisher:
Polska Akademia Nauk. Czasopisma i Monografie PAN
Subjects:
short lag spatial coherence
synthetic aperture
algorithm optimization
parallel processing
Description:
The computing performance optimization of the Short-Lag Spatial Coherence (SLSC) method applied to ultrasound data processing is presented. The method is based on the theory that signals from adjacent receivers are correlated, drawing on a simplified conclusion of the van Cittert-Zernike theorem. It has been proven that it can be successfully used in ultrasound data reconstruction with despeckling. Former works have shown that the SLSC method in its original form has two main drawbacks: time-consuming processing and low contrast in the area near the transceivers. In this study, we introduce a method that overcomes both of these drawbacks. The presented approach removes the dependency on the distance (the "lag" parameter value) between the signals used to calculate correlations. The approach has been tested by comparing results obtained with the original SLSC algorithm on data acquired from tissue phantoms. The modified method proposed here leads to constant rather than linear complexity, so the execution time is independent of the lag parameter value. The presented approach increases computation speed more than 10 times in comparison with the base SLSC algorithm for a typical lag parameter value. It also improves the output image quality in shallow areas and does not decrease quality in deeper areas.
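For reference only, a naive (unoptimized) per-pixel SLSC value can be computed as the sum of normalized correlations between channel signals up to a maximum lag M. The sketch below follows that textbook formulation on a delayed data block of shape (channels, samples), so its cost grows linearly with M – the dependence that the optimization described above removes; it is not the paper's algorithm.

```python
# Naive per-pixel SLSC (textbook form, not the paper's optimized variant):
# sum over lags m = 1..M of the average normalized correlation between
# channel i and channel i+m within a short axial kernel.
import numpy as np

def slsc_value(channel_data, max_lag):
    """channel_data: (n_channels, kernel_samples) delayed RF data for one pixel."""
    energy = np.sum(channel_data ** 2, axis=1)           # per-channel signal energy
    total = 0.0
    for m in range(1, max_lag + 1):                      # cost is linear in max_lag
        num = np.sum(channel_data[:-m] * channel_data[m:], axis=1)
        den = np.sqrt(energy[:-m] * energy[m:]) + 1e-12  # avoid division by zero
        total += np.mean(num / den)                      # average coherence at lag m
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.standard_normal((64, 16))                 # 64 channels, 16-sample kernel
    print(slsc_value(data, max_lag=10))
```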
Source:
Archives of Acoustics; 2019, 44, 4; 669-679
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Ewolucja ISA – wierzchołek góry lodowej
ISA evolution – tip of the iceberg
Authors:
Komorowski, W.
Links:
https://bibliotekanauki.pl/articles/137202.pdf
Publication date:
2012
Publisher:
Uczelnia Jana Wyżykowskiego
Subjects:
ISA
Instruction-Set Architecture
CISC
RISC
parallel processing
Description:
The instruction set, the main attribute of every computer architecture, has changed depending on the available technology and on user requirements. The article describes several ISA (Instruction-Set Architecture) designs that were key moments in the history of computing, pointing out the conditions that existed at the time they were created. The reasons for the CISC-to-RISC shift in the design paradigm in the 1980s are presented. The essence of parallel processing is characterized – from pipelining, through superscalar and VLIW organizations, up to massively parallel processing in today's supercomputers.
Instruction-set architecture is determined by many factors, such as technology and users' demand. The ISA evolution is illustrated with several examples – milestones in computing history: EDSAC, VAX, Berkeley RISC. The early-1980s CISC-RISC turning point in the architecture paradigm is explained. A short characterization of parallel processing is given – starting from pipelining, through superscalar and VLIW processors, up to petaflops supercomputers using the Massively Parallel Processing technique.
Source:
Zeszyty Naukowe Dolnośląskiej Wyższej Szkoły Przedsiębiorczości i Techniki. Studia z Nauk Technicznych; 2012, 1; 73-94
2299-3355
Appears in:
Zeszyty Naukowe Dolnośląskiej Wyższej Szkoły Przedsiębiorczości i Techniki. Studia z Nauk Technicznych
Content provider:
Biblioteka Nauki
Article
Title:
Toward the best combination of optimization with fuzzy systems to obtain the best solution for the GA and PSO algorithms using parallel processing
Authors:
Valdez, Fevrier
Kawano, Yunkio
Melin, Patricia
Links:
https://bibliotekanauki.pl/articles/384329.pdf
Publication date:
2020
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Subjects:
genetic algorithms
particle swarm optimization (PSO)
fuzzy logic
parallel processing
Description:
This paper focuses on finding the best configuration for PSO and GA, using different migration blocks as well as different sets of fuzzy system rules. To achieve this goal, the two optimization algorithms – particle swarm optimization (PSO) and the genetic algorithm (GA) – were configured to run in parallel and integrate a migration block that allows us to generate diversity within the subpopulations used by each algorithm. Dynamic parameter adjustment was also performed with a fuzzy system for the parameters of the PSO algorithm: the cognitive, social and inertia weight parameters. In the GA case, only the crossover parameter was modified.
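A toy sketch of the parallel-with-migration idea (not the authors' implementation): two worker processes evolve independent subpopulations of candidate solutions for a simple test function and periodically exchange their best individuals through queues. The population sizes, mutation step, migration interval and test function are arbitrary, and the simple random mutation stands in for the actual PSO and GA update rules.

```python
# Toy parallel metaheuristic with migration (illustration only): two processes
# each evolve a random-mutation population on the sphere function and swap
# their best individual every MIGRATE_EVERY generations.
import numpy as np
from multiprocessing import Process, Queue

DIM, POP, GENS, MIGRATE_EVERY = 10, 30, 200, 20

def fitness(x):
    return float(np.sum(x ** 2))              # sphere function: minimum at the origin

def evolve(name, out_q, in_q, result_q):
    rng = np.random.default_rng(abs(hash(name)) % 2**32)
    pop = rng.uniform(-5, 5, size=(POP, DIM))
    for gen in range(GENS):
        # simple per-individual mutation; keep the better of parent and trial
        trial = pop + rng.normal(0, 0.3, size=pop.shape)
        better = np.array([fitness(t) < fitness(p) for t, p in zip(trial, pop)])
        pop[better] = trial[better]
        if (gen + 1) % MIGRATE_EVERY == 0:
            best = min(pop, key=fitness)
            out_q.put(best.copy())                                 # send our best individual
            migrant = in_q.get()                                   # receive the neighbour's best
            worst = max(range(POP), key=lambda i: fitness(pop[i]))
            pop[worst] = migrant                                   # replace our worst individual
    result_q.put((name, fitness(min(pop, key=fitness))))

if __name__ == "__main__":
    q_ab, q_ba, results = Queue(), Queue(), Queue()
    a = Process(target=evolve, args=("subpop-A", q_ab, q_ba, results))
    b = Process(target=evolve, args=("subpop-B", q_ba, q_ab, results))
    a.start(); b.start()
    for _ in range(2):
        print(results.get())                                       # best fitness per subpopulation
    a.join(); b.join()
```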
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2020, 14, 1; 55-64
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
Title:
A parallel genetic algorithm for creating virtual portraits of historical figures
Authors:
Krawczyk, H.
Proficz, J.
Ziółkowski, T.
Links:
https://bibliotekanauki.pl/articles/1933983.pdf
Publication date:
2012
Publisher:
Politechnika Gdańska
Subjects:
genetic algorithms
fitness function
KASKADA platform
parallel processing
high performance computing
Description:
In this paper we present a genetic algorithm (GA) for creating hypothetical virtual portraits of historical figures and other individuals whose facial appearance is unknown. Our algorithm uses existing portraits of random people from a specific historical period and social background to evolve a set of face images potentially resembling the person whose image is to be found. We then use portraits of the person’s relatives to judge which of the evolved images are most likely to resemble his/her actual appearance. Unlike typical GAs, our algorithm uses a new supervised form of fitness function which itself is affected by the evolution process. Additional description of requested facial features can be provided to further influence the final solution (i.e. the virtual portrait). We present an example of a virtual portrait created by our algorithm. Finally, the performance of a parallel implementation developed for the KASKADA platform is presented and evaluated.
Source:
TASK Quarterly. Scientific Bulletin of Academic Computer Centre in Gdansk; 2012, 16, 1-2; 145-162
1428-6394
Appears in:
TASK Quarterly. Scientific Bulletin of Academic Computer Centre in Gdansk
Content provider:
Biblioteka Nauki
Article
Title:
Analysis of informative feature changes on color images using mass-parallel processing
Authors:
Doudkin, A.
Ganchenko, V.
Petrovsky, A.
Sobkowiak, B.
Links:
https://bibliotekanauki.pl/articles/335285.pdf
Publication date:
2009
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Maszyn Rolniczych
Subjects:
mass-parallel processing
images
analysis
plant
disease
Description:
Remote sensing methods allow effective detection of field areas infected by plant diseases. Detecting an infection at an early stage of its development reduces the cost of plant protection measures. In the paper, the problems of disease feature extraction as well as disease identification are considered. Three groups of potato plants, with 25 images in each group, were under experimental observation in laboratory conditions. The proposed algorithm for automatic detection of appearance changes has shown good object identification results when the change of an object's color characteristics is used as the identifying attribute. The method is influenced most strongly by the presence in the frame of extraneous objects and shadows having the color of the object, and by non-uniform illumination, which creates additional interference.
Source:
Journal of Research and Applications in Agricultural Engineering; 2009, 54, 2; 32-36
1642-686X
2719-423X
Appears in:
Journal of Research and Applications in Agricultural Engineering
Content provider:
Biblioteka Nauki
Article
Title:
Cost-efficient project management based on critical chain method with partial availability of resources
Authors:
Pawiński, G.
Sapiecha, K.
Links:
https://bibliotekanauki.pl/articles/205579.pdf
Publication date:
2014
Publisher:
Polska Akademia Nauk. Instytut Badań Systemowych PAN
Subjects:
project management and scheduling
resource allocation
resource constraints
metaheuristic algorithms
parallel processing
Description:
Cost-efficient project management based on the Critical Chain Project Management (CCPM) method is investigated in this paper. This is a variant of the resource-constrained project scheduling problem (RCPSP) in which resources are only partially available and a deadline is given, but the cost of the project should be minimized. RCPSP is a well-known NP-hard problem, but originally it does not take the initial resource workload into consideration. A metaheuristic algorithm driven by a gain metric was adapted to solve the problem when applied to CCPM. Refinement methods enhancing the quality of the results are developed. The improvement expands the search space by inserting a task in place of an already allocated task if a better allocation can be found for it. The increase in computation time is reduced by distributed calculations. The computational experiments showed significant efficiency of the approach in comparison with greedy methods and with a genetic algorithm, as well as a high reduction of the time needed to obtain the results.
Source:
Control and Cybernetics; 2014, 43, 1; 95-109
0324-8569
Appears in:
Control and Cybernetics
Content provider:
Biblioteka Nauki
Article
Title:
Scalability evaluation of Matlab routines for parallel image processing environment
Authors:
Saif, J. A. M.
Sumionka, P.
Links:
https://bibliotekanauki.pl/articles/1940231.pdf
Publication date:
2017
Publisher:
Politechnika Gdańska
Subjects:
scalability
parallel image processing
Matlab
Description:
Image edge detection plays a crucial role in image analysis and computer vision; it is defined as the process of finding the boundaries between objects within the considered image. The recognized edges may further be used in object recognition or image matching. In this paper a Canny image edge detector is used, which gives acceptable results that can be utilized in many disciplines, but the technique is time-consuming, especially when a big collection of images is analyzed. For that reason, to enhance the performance of the algorithms, a parallel platform allowing the computation to be sped up is used. The scalability of a multicore supercomputer node, which is exploited to run the same routines for a collection of color images (from 2100 to 42000 images), is investigated.
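The paper evaluates Matlab routines, but the embarrassingly parallel pattern it relies on – run the same edge detector independently on every image of a collection – can be sketched with Python's multiprocessing and OpenCV as an analogy; the directory path and Canny thresholds below are placeholders.

```python
# Analogy to the Matlab experiment (not the paper's code): apply Canny edge
# detection to every image in a directory, one worker process per core.
import glob
from multiprocessing import Pool, cpu_count

import cv2

LOW, HIGH = 100, 200                      # placeholder Canny thresholds

def detect_edges(path):
    """Read one image, run Canny, write the edge map next to it."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return path, False
    edges = cv2.Canny(img, LOW, HIGH)
    cv2.imwrite(path + ".edges.png", edges)
    return path, True

if __name__ == "__main__":
    files = glob.glob("images/*.jpg")     # placeholder image collection
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(detect_edges, files)
    print(f"processed {sum(ok for _, ok in results)} of {len(files)} images")
```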
Source:
TASK Quarterly. Scientific Bulletin of Academic Computer Centre in Gdansk; 2017, 21, 4; 423-433
1428-6394
Appears in:
TASK Quarterly. Scientific Bulletin of Academic Computer Centre in Gdansk
Content provider:
Biblioteka Nauki
Article
Title:
Optimization of Track-Before-Detect Systems for GPGPU
Optymalizacja systemów śledzenia przed detekcją dla GPGPU
Authors:
Mazurek, P.
Links:
https://bibliotekanauki.pl/articles/154551.pdf
Publication date:
2010
Publisher:
Stowarzyszenie Inżynierów i Techników Mechaników Polskich
Subjects:
estimation
tracking
parallel image processing
GPGPU
Track-Before-Detect
Description:
The computation speed of GPGPU implementations of a Track-Before-Detect algorithm is compared in the paper. The conventional and subpixel variants are compared for different thread processing block sizes. Decimation of the state space is assumed in order to reduce external memory accesses. A GPGPU code profiling technique based on source code synthesis is applied to find the best parameters and code variants for a particular GPGPU.
Tracking systems based on the Track-Before-Detect (TBD) scheme make it possible to track objects with a low signal-to-noise ratio (SNR < 1), which is important for civilian and military applications. Conventional tracking systems based on detection followed by tracking are not suitable, owing to the large number of false or missed detections. The most important drawback of TBD algorithms is the scale of computation, because all hypotheses (trajectories) should be tested even if there is no object in range. The proposed decimation-based method [8] gives a significant (several-fold) reduction of the processing time on a GPGPU. Programmable graphics cards (GPGPUs) contain a large number of processing units (streaming processors) with very small but fast shared memory and large but very slow global memory. In the article, the proposed method [8] was tested using the Spatio-Temporal TBD algorithm, with additional code profiling on the Nvidia CUDA processing platform. The CUDA compiler is additionally used to optimize the processing time for different processing block sizes. The state space is processed internally using shared memory and stored in global memory after a certain number of time steps. A windowing approach is used to process the 2D input measurement data stored in global memory.
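As a purely illustrative sketch of the kind of recursion a Spatio-Temporal TBD filter performs (a CPU/NumPy toy, not the paper's GPGPU code), the snippet below accumulates, for each assumed constant-velocity hypothesis, a shifted and decayed copy of the previous state-space likelihood plus the new 2D measurement frame; the smoothing factor, velocity set and synthetic target are arbitrary.

```python
# Illustrative Spatio-Temporal TBD-style recursion (CPU/NumPy, not the GPGPU code):
# for each velocity hypothesis, shift the accumulated likelihood map along the
# hypothesized motion, decay it, and add the new measurement frame.
import numpy as np

ALPHA = 0.95                                              # smoothing (forgetting) factor
VELOCITIES = [(-1, 0), (0, 0), (1, 0), (0, 1), (0, -1)]   # (dy, dx) hypotheses

def tbd_update(state, frame):
    """state: (n_velocities, H, W) accumulated maps; frame: (H, W) measurement."""
    new_state = np.empty_like(state)
    for v, (dy, dx) in enumerate(VELOCITIES):
        predicted = np.roll(state[v], shift=(dy, dx), axis=(0, 1))  # motion prediction
        new_state[v] = ALPHA * predicted + frame                    # information update
    return new_state

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    H, W = 64, 64
    state = np.zeros((len(VELOCITIES), H, W))
    for t in range(50):                          # noisy frames with a weak moving target
        frame = rng.normal(0, 1, (H, W))
        frame[32, t % W] += 0.8                  # SNR < 1 target drifting in x
        state = tbd_update(state, frame)
    v_best, y, x = np.unravel_index(np.argmax(state), state.shape)
    print("strongest hypothesis:", VELOCITIES[v_best], "at", (y, x))
```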
Source:
Pomiary Automatyka Kontrola; 2010, R. 56, nr 7, 7; 655-667
0032-4140
Appears in:
Pomiary Automatyka Kontrola
Content provider:
Biblioteka Nauki
Article
