Search results for the phrase "data reduction" (criterion: Subject)


Title:
System-level approaches to power efficiency in FPGA-based designs (data reduction algorithms case study)
Authors:
Czapski, P. P.
Śluzek, A.
Links:
https://bibliotekanauki.pl/articles/384769.pdf
Publication date:
2011
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Subjects:
power awareness
FPGA
system-level
Handel-C
data reduction
Description:
In this paper we present preliminary results on system-level analysis of power efficiency in FPGA-based designs. Advanced FPGA devices allow the implementation of sophisticated systems (e.g. embedded sensor nodes). However, designing such complex applications at the lower levels of abstraction is prohibitively expensive, so moving the design process to higher abstraction layers, i.e. system levels of design, is a rational decision. This paper shows that at least a certain level of power awareness is achievable at these higher abstractions. A methodology and preliminary results for power-aware, system-level algorithm partitioning are presented. We select data reduction algorithms as the case study because of their importance in wireless sensor networks (WSNs). Although the research has been focused on WSN applications of FPGAs, it is envisaged that the presented ideas are applicable to other untethered embedded systems based on FPGAs and similar programmable devices.
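As a rough illustration of how power awareness can enter such system-level partitioning decisions, the sketch below ranks two hypothetical partitionings of a data reduction pipeline by the classic CMOS dynamic power estimate P ≈ α·C·V²·f. The module names, switching activities, capacitances, and clock rates are invented for the example and are not taken from the paper.

```python
# Hypothetical sketch: ranking candidate system-level partitionings by the
# classic CMOS dynamic power estimate P ~ alpha * C * V^2 * f.
# All module names and numeric values are illustrative assumptions,
# not figures from the paper.

def dynamic_power_mw(alpha: float, c_eff_nf: float, volts: float, f_mhz: float) -> float:
    """Dynamic power in milliwatts for one module.

    alpha    -- average switching activity (0..1)
    c_eff_nf -- effective switched capacitance in nanofarads
    volts    -- core voltage
    f_mhz    -- clock frequency in MHz
    """
    return alpha * (c_eff_nf * 1e-9) * volts ** 2 * (f_mhz * 1e6) * 1e3

# Candidate partitionings: lists of (module, activity, capacitance nF, MHz).
partitionings = {
    "single_clock_domain": [("reduce", 0.20, 1.8, 100)],
    "split_domains": [("pre_filter", 0.15, 0.9, 50), ("reduce", 0.25, 0.9, 100)],
}

for name, modules in partitionings.items():
    total = sum(dynamic_power_mw(a, c, 1.2, f) for _, a, c, f in modules)
    print(f"{name}: ~{total:.1f} mW")
```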
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2011, 5, 2; 49-59
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
Title:
Data Reduction Method for Synthetic Transmit Aperture Algorithm
Authors:
Karwat, P.
Klimonda, Z.
Sęklewski, M.
Lewandowski, M.
Nowicki, A.
Links:
https://bibliotekanauki.pl/articles/177848.pdf
Publication date:
2010
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
ultrasonic imaging
synthetic transmit aperture
data reduction
effective aperture
reciprocity
Description:
Ultrasonic methods of imaging the internal structures of the human body are being continuously enhanced, and new algorithms are created to improve particular output parameters. The synthetic aperture (SA) method is an example: it allows images to be displayed at a higher frame rate than the conventional beamforming method. The higher computational complexity of the SA method is a limitation, however, and can prevent the desired reconstruction time from being achieved. This problem can be solved by neglecting part of the data. This obviously implies a decrease in imaging quality, but a proper data reduction technique minimizes the image degradation. The proposed data reduction method can be used with the synthetic transmit aperture (STA) method and is based on the assumption that the signal obtained from any pair of transducers is the same regardless of which transducer transmits and which receives. According to this reciprocity postulate, nearly half of the data can be ignored without a decrease in image quality. The presented results of simulations and measurements with wire and tissue phantoms show that the proposed technique halves the amount of data to be processed while maintaining resolution and allowing only a small decrease in the SNR and contrast of the resulting images.
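The reciprocity assumption above translates directly into code: only transducer pairs with tx ≤ rx need to be acquired or stored, and the other half of the data cube is recovered by symmetry. A minimal sketch, with invented array shapes rather than the authors' implementation:

```python
import numpy as np

# Reciprocity-based reduction for synthetic transmit aperture (STA) data:
# the echo recorded for transmit element i / receive element j is assumed
# identical to that for j / i, so only the upper triangle of the (tx, rx)
# data cube is kept. Shapes and names are illustrative.
n_elem, n_samples = 8, 1024
rng = np.random.default_rng(0)
rf = rng.standard_normal((n_elem, n_elem, n_samples))  # full STA data cube

# Reduced acquisition: keep tx <= rx only (roughly half the data).
kept = {(i, j): rf[i, j] for i in range(n_elem) for j in range(i, n_elem)}
print(f"kept {len(kept)} of {n_elem * n_elem} tx/rx pairs")

def signal(i: int, j: int) -> np.ndarray:
    """Return the echo for pair (i, j), using reciprocity when needed."""
    return kept[(i, j)] if i <= j else kept[(j, i)]

# Beamforming can now request any pair transparently:
assert np.array_equal(signal(5, 2), signal(2, 5))
```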
Source:
Archives of Acoustics; 2010, 35, 4; 635-642
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Zastosowanie pakietu NVivo w analizie materiałów nieustrukturyzowanych
Computer Aided Qualitative Research Using NVivo in unstructured data analysis
Authors:
Brosz, Maciej
Links:
https://bibliotekanauki.pl/articles/1373761.pdf
Publication date:
2012
Publisher:
Uniwersytet Łódzki. Wydawnictwo Uniwersytetu Łódzkiego
Subjects:
CAQDAS
loss of information
data reduction
coding procedures
NVivo
grounded theory
Description:
This paper concerns the use of NVivo software in qualitative data analysis. Its main subject is the data reduction that accompanies the process of qualitative data analysis. Using software does not necessarily cause uncontrolled modifications of data and, thereby, the loss of relevant aspects of the collected material. The latest versions of CAQDAS (i.e., NVivo 8, 9) make it possible to code barely altered sources. The paper presents examples of coding procedures for texts, pictures, and audio-visual recordings. Additionally, it describes some techniques aiding the coding process.
Source:
Przegląd Socjologii Jakościowej; 2012, 8, 1; 98-125
1733-8069
Appears in:
Przegląd Socjologii Jakościowej
Content provider:
Biblioteka Nauki
Article
Title:
IoT sensing networks for gait velocity measurement
Authors:
Chou, Jyun-Jhe
Shih, Chi-Sheng
Wang, Wei-Dean
Huang, Kuo-Chin
Links:
https://bibliotekanauki.pl/articles/330707.pdf
Publication date:
2019
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
internet of things
IoT middleware
data fusion
data reduction
Description:
Gait velocity has been considered the sixth vital sign. It can be used not only to estimate the survival rate of the elderly, but also to predict the tendency to fall. Unfortunately, gait velocity is usually measured on a specially designed walk path, which has to be done at clinics or health institutes. Wearable tracking services using an accelerometer or an inertial measurement unit can measure the velocity for a certain time interval, but not continuously, due to the lack of a sustainable energy source. To tackle the shortcomings of wearable sensors, this work develops a framework to measure gait velocity using distributed tracking services deployed indoors. Two major challenges are tackled in this paper. The first is to minimize the sensing errors caused by thermal noise and overlapping sensing regions. The second is to minimize the data volume to be stored or transmitted. Given the numerous errors caused by remote sensing, the framework takes into account the temporal and spatial relationships among tracking services to calibrate the services systematically. Consequently, gait velocity can be measured without wearable sensors and with higher accuracy. The developed method is built on top of WuKong, an intelligent IoT middleware, to enable location- and time-aware data collection. In this work, we present an iterative method to reduce the data volume collected by thermal sensors. The evaluation results show that the file size is up to 25% of that of the JPEG format when the RMSE is limited to 0.5°.
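A minimal sketch of an iterative reduction loop in the spirit described above: the quantization of a thermal frame is coarsened step by step, and the loop stops before the reconstruction RMSE exceeds a 0.5° budget. This is an illustrative stand-in, not the WuKong middleware implementation.

```python
import numpy as np

def reduce_frame(frame: np.ndarray, rmse_budget: float = 0.5):
    """Coarsen quantization until just before RMSE exceeds the budget."""
    step = 0.1                     # initial quantization step, in degrees
    best = frame.copy()
    while True:
        quantized = np.round(frame / step) * step
        rmse = float(np.sqrt(np.mean((frame - quantized) ** 2)))
        if rmse > rmse_budget:
            return best, step / 2  # last step that still met the budget
        best, step = quantized, step * 2

# Fake 24x32 thermal frame with values around 20-25 degrees.
frame = 20 + 5 * np.random.default_rng(1).random((24, 32))
reduced, used_step = reduce_frame(frame)
print(f"quantization step {used_step:.1f} deg; "
      f"{np.unique(reduced).size} distinct values instead of {frame.size}")
```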
Source:
International Journal of Applied Mathematics and Computer Science; 2019, 29, 2; 245-259
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
Python Machine Learning. Dry Beans Classification Case
Authors:
Słowiński, Grzegorz
Links:
https://bibliotekanauki.pl/articles/50091919.pdf
Publication date:
2024-09
Publisher:
Warszawska Wyższa Szkoła Informatyki
Subjects:
machine learning
deep learning
data dimension reduction
activation function
Description:
A dataset containing over 13k samples of dry bean geometric features was analyzed using machine learning (ML) and deep learning (DL) techniques with the goal of automatically classifying the bean species. Performance in terms of accuracy and training and test time was analyzed. First, the original dataset was reduced to eliminate redundant features (those too strongly correlated with, and echoing, others). Then the dataset was visualized and analyzed with a few shallow learning techniques and a simple artificial neural network. Cross-validation was used to check the repeatability of the learning process. The influence of data preparation (dimension reduction) on the shallow learning techniques was observed. In the case of the multilayer perceptron, three activation functions were tried: ReLU, ELU, and sigmoid. Random Forest appeared to be the best model for the dry bean classification task, reaching an average accuracy of 92.61% with reasonable training and test times.
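A hedged sketch of such a pipeline: drop one feature from every strongly correlated pair, then cross-validate a Random Forest. The CSV path and the 0.95 correlation threshold are assumptions for illustration, not the paper's exact settings.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Assumed local copy of the dry bean dataset with a "Class" label column.
df = pd.read_csv("dry_beans.csv")
X, y = df.drop(columns=["Class"]), df["Class"]

# Dimension reduction: remove one feature from every pair whose absolute
# Pearson correlation exceeds the (assumed) threshold.
corr = X.corr().abs()
drop = {c2 for i, c1 in enumerate(corr.columns)
        for c2 in corr.columns[i + 1:] if corr.loc[c1, c2] > 0.95}
X_reduced = X.drop(columns=sorted(drop))
print(f"dropped features: {sorted(drop)}")

# 5-fold cross-validation to check repeatability of the learning process.
scores = cross_val_score(RandomForestClassifier(random_state=0),
                         X_reduced, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.4f}")
```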
Source:
Zeszyty Naukowe Warszawskiej Wyższej Szkoły Informatyki; 2024, 18, 30; 7-26
1896-396X
2082-8349
Appears in:
Zeszyty Naukowe Warszawskiej Wyższej Szkoły Informatyki
Content provider:
Biblioteka Nauki
Article
Title:
Analiza czynnikowa zdjęć wielospektralnych
Principal component analysis of multispectral images
Authors:
Czapski, P.
Kotlarz, J.
Kubiak, K.
Tkaczyk, M.
Links:
https://bibliotekanauki.pl/articles/213759.pdf
Publication date:
2014
Publisher:
Sieć Badawcza Łukasiewicz - Instytut Lotnictwa
Subjects:
PCA
statistical methods
biodiversity
light curves
data reduction
Description:
The analysis of multispectral images often comes down to mathematical modeling based on multidimensional metric spaces into which the data collected by the sensors are placed. Such an approach, although intuitive and easy to apply in image analysis algorithms, can result in an entirely unnecessary increase in the computing power required for the analysis. One generally accepted group of methods for analyzing data sets of this type is factor analysis. Two such methods are presented in this paper: Principal Component Analysis (PCA) and Simplex Shrink-Wrapping (SSW). Used together, they significantly reduce the dimension of a given metric space, allowing characteristic components to be found in multispectral data, i.e. the whole detection process of the photographed objects to be carried out. In 2014 this methodology was adopted by the Data Processing Department of the Institute of Aviation and the Division of Forest Protection of the Forest Research Institute for the analysis of two very different series of multispectral images: the detection of the major components of the Martian surface (based on multispectral images obtained during the EPOXI mission, NASA) and the estimation of the biodiversity of one of the forest research plots of the HESOFF project.
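A brief sketch of the PCA step on synthetic data: every pixel of a multispectral cube is treated as a point in an n-band space and projected onto a few principal components. The SSW step is not shown, and the image dimensions are invented.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic 12-band multispectral cube; dimensions are illustrative.
bands, height, width = 12, 64, 64
cube = np.random.default_rng(2).random((bands, height, width))

# Treat each pixel as a point in a 12-dimensional band space.
pixels = cube.reshape(bands, -1).T          # (n_pixels, n_bands)

# Project onto 3 principal components and fold back into images.
pca = PCA(n_components=3).fit(pixels)
components = pca.transform(pixels).T.reshape(3, height, width)

print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
# The 3 component images can now be inspected for characteristic surface
# constituents instead of the original 12 correlated bands.
```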
Source:
Prace Instytutu Lotnictwa; 2014, 1 (234) March 2014; 143-150
0509-6669
2300-5408
Appears in:
Prace Instytutu Lotnictwa
Content provider:
Biblioteka Nauki
Article
Title:
Application of agent-based simulated annealing and tabu search procedures to solving the data reduction problem
Authors:
Czarnowski, I.
Jędrzejowicz, P.
Links:
https://bibliotekanauki.pl/articles/907819.pdf
Publication date:
2011
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
data reduction
machine learning
A-Team
optimization
multi-agent system
Description:
The problem considered concerns data reduction for machine learning. Data reduction aims at deciding which features and instances from the training set should be retained for further use during the learning process. Data reduction improves the capabilities and generalization properties of the learning model and shortens the learning process; it can also help in scaling up to large data sources. The paper proposes an agent-based data reduction approach with the learning process executed by a team of agents (an A-Team). Several A-Team architectures with agents executing simulated annealing and tabu search procedures are proposed and investigated. The paper includes a detailed description of the proposed approach and discusses the results of a validating experiment.
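A minimal single-agent sketch of the simulated annealing flavour of such data reduction: searching for a subset of training instances that keeps 1-NN accuracy high. The A-Team coordination of multiple agents is not reproduced, and the dataset, size penalty, and cooling schedule are illustrative assumptions.

```python
import math
import random
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)   # stand-in dataset
rng = random.Random(0)

def score(mask: np.ndarray) -> float:
    """1-NN accuracy of the reduced set on the full set, minus a size penalty."""
    if mask.sum() < 3:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=1).fit(X[mask], y[mask])
    return clf.score(X, y) - 0.001 * mask.sum()

mask = np.ones(len(X), dtype=bool)  # start from the full training set
current, temp = score(mask), 1.0
for _ in range(2000):
    i = rng.randrange(len(X))
    mask[i] = not mask[i]                      # flip one instance in or out
    s = score(mask)
    if s >= current or rng.random() < math.exp((s - current) / temp):
        current = s                            # accept the move
    else:
        mask[i] = not mask[i]                  # reject: undo the flip
    temp *= 0.995                              # cool down
print(f"kept {int(mask.sum())} of {len(X)} instances, score {current:.3f}")
```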
Source:
International Journal of Applied Mathematics and Computer Science; 2011, 21, 1; 57-68
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
Efficient astronomical data condensation using approximate nearest neighbors
Authors:
Łukasik, Szymon
Lalik, Konrad
Sarna, Piotr
Kowalski, Piotr A.
Charytanowicz, Małgorzata
Kulczycki, Piotr
Links:
https://bibliotekanauki.pl/articles/907932.pdf
Publication date:
2019
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
big data
astronomical observation
data reduction
nearest neighbor search
kd-trees
Description:
Extracting useful information from astronomical observations represents one of the most challenging tasks of data exploration. This is largely due to the volume of the data acquired using advanced observational tools. While other challenges typical of the class of big data problems (like data variety) are also present, the size of the datasets represents the most significant obstacle to visualization and subsequent analysis. This paper studies an efficient data condensation algorithm aimed at providing a compact representation of such datasets. It is based on fast nearest neighbor calculation using tree structures and parallel processing. In addition, the possibility of using approximate identification of neighbors to further improve the algorithm's time performance is evaluated. The properties of the proposed approach, both in terms of performance and condensation quality, are experimentally assessed on astronomical datasets related to the GAIA mission. It is concluded that the introduced technique might serve as a scalable method of alleviating the problem of dataset size.
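A hedged sketch of tree-based condensation in this spirit: thin a point cloud so that no two retained points lie closer than a radius r, using a kd-tree for fast neighbor queries. The radius, the synthetic catalogue, and the greedy scheme are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
points = rng.random((20_000, 3))   # stand-in for an astronomical catalogue

tree = cKDTree(points)
radius = 0.02
keep = np.ones(len(points), dtype=bool)
for i in range(len(points)):
    if keep[i]:
        # Discard every later point crowding the retained point i.
        for j in tree.query_ball_point(points[i], radius):
            if j > i:
                keep[j] = False

condensed = points[keep]
print(f"condensed {len(points)} points to {len(condensed)}")
```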
Source:
International Journal of Applied Mathematics and Computer Science; 2019, 29, 3; 467-476
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
Optimization on the complementation procedure towards efficient implementation of the index generation function
Authors:
Borowik, G.
Links:
https://bibliotekanauki.pl/articles/330597.pdf
Publication date:
2018
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
data reduction
feature selection
indiscernibility matrix
logic synthesis
index generation function
Description:
In the era of big data, solutions are desired that are capable of efficient data reduction. This paper presents a summary of research on an algorithm for the complementation of a Boolean function, which is fundamental for logic synthesis and data mining. The existing problems and their proposed solutions are examined in turn, including an analysis of current implementations of the algorithm. Then, methods to speed up the computation process and an efficient parallel implementation of the algorithm are shown; they include optimization of the data representation, recursive decomposition, merging, and removal of redundant data. Besides discussing computational complexity, the paper compares the processing times of the proposed solution with those of well-known analysis and data mining systems. Although the presented idea is focused on searching for all possible solutions, it can be restricted to finding just those of the smallest size. Both approaches have great application potential, including proving mathematical theorems, logic synthesis (especially index generation functions), and data processing and mining tasks such as feature selection, data discretization, and rule generation. The problem considered is NP-hard, and it is easy to point to examples that are not solvable within the expected amount of time. However, the solution allows the barrier of computation to be moved one step further. For example, the algorithm is at the moment the only one able to calculate all minimal sets of features for a few standard benchmarks. Unlike many existing methods, the algorithm additionally works with undetermined values. The result of this research is an easily extendable experimental software system that is the fastest among the tested solutions and data mining systems.
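A toy sketch of the reduct-style reduction this algorithm targets: find all minimal sets of features that discern every pair of objects with different decisions, by brute force over an invented decision table. This is a stand-in for the optimized complementation algorithm, not the paper's method.

```python
from itertools import combinations

# Invented 4-object, 4-feature decision table: (feature vector, decision).
table = [
    ((0, 1, 0, 1), 0),
    ((1, 1, 0, 0), 1),
    ((0, 0, 1, 1), 1),
    ((1, 0, 1, 0), 0),
]
n_features = 4

# Indiscernibility matrix: for each pair of objects with different
# decisions, the set of features on which they differ. Any minimal
# feature set must intersect every such set.
pairs = [{f for f in range(n_features) if a[f] != b[f]}
         for (a, da), (b, db) in combinations(table, 2) if da != db]

reducts = []
for size in range(1, n_features + 1):
    for cand in combinations(range(n_features), size):
        if any(set(cand) >= set(r) for r in reducts):
            continue                      # a smaller subset already suffices
        if all(set(cand) & p for p in pairs):
            reducts.append(cand)
print("minimal feature sets:", reducts)
```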
Source:
International Journal of Applied Mathematics and Computer Science; 2018, 28, 4; 803-815
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
An effective data reduction model for machine emergency state detection from big data tree topology structures
Authors:
Iaremko, Iaroslav
Senkerik, Roman
Jasek, Roman
Lukastik, Petr
Links:
https://bibliotekanauki.pl/articles/2055178.pdf
Publication date:
2021
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
OPC UA
OPC tree
principal component analysis
PCA
big data analysis
data reduction
machine tool
anomaly detection
emergency states
Description:
This work presents an original model for detecting machine tool anomalies and emergency states through the processing of operation data. The paper focuses on an elastic hierarchical system for effective data reduction and classification, which encompasses several modules. First, principal component analysis (PCA) is used to reduce the many input signals from big data tree topology structures into two signals representing all of them. Then a technique for segmenting the operating machine data, based on dynamic time warping and hierarchical clustering, is used to compute accident characteristics of the signals using classifiers such as the maximum level change, the signal trend, the variance of residuals, and others. The data segmentation and analysis techniques enable effective and robust detection of machine tool anomalies and emergency states thanks to almost real-time data collection from strategically placed sensors and results collected from previous production cycles. The emergency state detection model described in this paper could be beneficial for improving the production process, increasing production efficiency by detecting and minimizing machine tool error conditions, and improving product quality and overall equipment productivity. The proposed model was tested on H-630 and H-50 machine tools in a real production environment at the Tajmac-ZPS company.
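An illustrative sketch of the first stage: many machine signals are compressed into two principal components, and simple segment characteristics (maximum level change, linear trend, residual variance) are computed on each. The signal counts, the injected fault, and the exact form of the characteristics are invented for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
signals = rng.standard_normal((200, 1000))   # 200 sensors x 1000 samples
signals[50, 600:] += 8.0                     # injected "emergency" level jump

# Reduce the 200 input signals to 2 summary signals via PCA.
pcs = PCA(n_components=2).fit_transform(signals.T).T   # shape (2, 1000)

def characteristics(seg: np.ndarray) -> dict:
    """Simple accident characteristics of one signal segment."""
    t = np.arange(len(seg))
    slope, intercept = np.polyfit(t, seg, 1)
    residuals = seg - (slope * t + intercept)
    return {
        "max_level_change": float(np.max(np.abs(np.diff(seg)))),
        "trend": float(slope),
        "residual_variance": float(residuals.var()),
    }

for k, pc in enumerate(pcs):
    print(f"PC{k + 1}:", characteristics(pc))
```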
Source:
International Journal of Applied Mathematics and Computer Science; 2021, 31, 4; 601-611
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
