
You are searching for the phrase "data reduction" by criterion: Topic


Title:
System-level approaches to power efficiency in FPGA-based designs (data reduction algorithms case study)
Authors:
Czapski, P. P.
Śluzek, A.
Links:
https://bibliotekanauki.pl/articles/384769.pdf
Publication date:
2011
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Topics:
power awareness
FPGA
system-level
Handel-C
data reduction
Description:
In this paper we present preliminary results on system-level analysis of power efficiency in FPGA-based designs. Advanced FPGA devices allow the implementation of sophisticated systems (e.g. embedded sensor nodes). However, designing such complex applications is prohibitively expensive at lower levels of abstraction, so moving the design process to higher abstraction layers, i.e. system levels of design, is a rational decision. This paper shows that at least a certain level of power awareness is achievable at these higher abstractions. A methodology and preliminary results for power-aware, system-level algorithm partitioning are presented. We select data reduction algorithms as the case study because of their importance in wireless sensor networks (WSNs). Although the research has focused on WSN applications of FPGAs, it is envisaged that the presented ideas are applicable to other untethered embedded systems based on FPGAs and similar programmable devices.
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2011, 5, 2; 49-59
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
Title:
Data Reduction Method for Synthetic Transmit Aperture Algorithm
Authors:
Karwat, P.
Klimonda, Z.
Sęklewski, M.
Lewandowski, M.
Nowicki, A.
Links:
https://bibliotekanauki.pl/articles/177848.pdf
Publication date:
2010
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Topics:
ultrasonic imaging
synthetic transmit aperture
data reduction
effective aperture
reciprocity
Description:
Ultrasonic methods of imaging the internal structures of the human body are being continuously enhanced, and new algorithms are created to improve particular output parameters. The synthetic aperture (SA) method is an example: it allows images to be displayed at a higher frame rate than conventional beamforming. The higher computational complexity of the SA method is a limitation, however, and can prevent a desired reconstruction time from being achieved. This problem can be addressed by neglecting part of the data. Doing so obviously decreases imaging quality, but a proper data reduction technique minimizes the image degradation. The proposed data reduction scheme can be used with the synthetic transmit aperture (STA) method and is based on the assumption that the signal obtained from any pair of transducers is the same regardless of which transducer transmits and which receives. Under this reciprocity postulate, nearly half of the data can be ignored without loss of image quality. The presented results of simulations and measurements using wire and tissue phantoms show that the proposed technique halves the amount of data to be processed while maintaining resolution and allowing only a small decrease in SNR and contrast of the resulting images. (A sketch of the pair-halving idea follows this record.)
Source:
Archives of Acoustics; 2010, 35, 4; 635-642
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
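The reciprocity argument above translates directly into an indexing trick. The following numpy sketch (the array sizes and random `rf` data are illustrative assumptions, not the authors' code) stores only the transmit/receive pairs with tx <= rx and mirrors them back before image reconstruction:

```python
import numpy as np

# Hypothetical full STA dataset: one A-scan per transmit/receive pair
# of an N-element array (sizes are illustrative).
N, S = 64, 2048
rng = np.random.default_rng(0)
rf = rng.standard_normal((N, N, S)).astype(np.float32)

# Reciprocity: the signal for pair (i, j) equals that for (j, i),
# so only pairs with tx <= rx need to be acquired or stored.
tx, rx = np.triu_indices(N)
rf_half = rf[tx, rx, :]              # N*(N+1)/2 A-scans instead of N*N

# Before beamforming, mirror the stored half back into a full matrix
# (the lower triangle is a copy, not newly acquired data).
rf_full = np.empty_like(rf)
rf_full[tx, rx, :] = rf_half
rf_full[rx, tx, :] = rf_half

print(f"kept {rf_half.shape[0] / (N * N):.1%} of the pairwise A-scans")
```

Keeping the upper triangle retains N*(N+1)/2 of the N*N A-scans, i.e. just over half, which matches the paper's claim of halving the data volume.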
Title:
IoT sensing networks for gait velocity measurement
Authors:
Chou, Jyun-Jhe
Shih, Chi-Sheng
Wang, Wei-Dean
Huang, Kuo-Chin
Links:
https://bibliotekanauki.pl/articles/330707.pdf
Publication date:
2019
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Topics:
internet of things
IoT middleware
data fusion
data reduction
Description:
Gait velocity has been considered the sixth vital sign. It can be used not only to estimate the survival rate of the elderly, but also to predict the tendency to fall. Unfortunately, gait velocity is usually measured on a specially designed walk path, which has to be done at clinics or health institutes. Wearable tracking services using an accelerometer or an inertial measurement unit can measure the velocity for a certain time interval, but not continuously, due to the lack of a sustainable energy source. To tackle the shortcomings of wearable sensors, this work develops a framework to measure gait velocity using distributed tracking services deployed indoors. Two major challenges are tackled in this paper. The first is to minimize the sensing errors caused by thermal noise and overlapping sensing regions. The second is to minimize the data volume to be stored or transmitted. Given the numerous errors caused by remote sensing, the framework takes into account the temporal and spatial relationships among tracking services to calibrate the services systematically. Consequently, gait velocity can be measured without wearable sensors and with higher accuracy. The developed method is built on top of WuKong, an intelligent IoT middleware, to enable location- and time-aware data collection. In this work, we present an iterative method to reduce the data volume collected by the thermal sensors. The evaluation results show that the file size is up to 25% of that of the JPEG format when the RMSE is limited to 0.5°. (An RMSE-bounded frame-reduction sketch follows this record.)
Source:
International Journal of Applied Mathematics and Computer Science; 2019, 29, 2; 245-259
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
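The paper's iterative reduction method is not spelled out in the abstract, so the sketch below only illustrates the underlying trade: accept a bounded RMSE in exchange for storing fewer thermal frames. The 8x8 frame size, the drifting synthetic data, and the store-on-change rule are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

def reduce_frames(frames, rmse_limit=0.5):
    """Store a thermal frame only when it differs from the last stored
    frame by more than rmse_limit (degrees); otherwise reference the
    previously stored frame. A stand-in for the paper's method."""
    kept, index = [], []
    for frame in frames:
        if kept:
            rmse = np.sqrt(np.mean((frame - kept[-1]) ** 2))
            if rmse <= rmse_limit:
                index.append(len(kept) - 1)    # reuse the stored frame
                continue
        kept.append(frame)
        index.append(len(kept) - 1)
    return kept, index

# Synthetic 8x8 thermal-array frames with slow drift plus noise.
rng = np.random.default_rng(1)
frames = [20 + 0.05 * t + 0.1 * rng.standard_normal((8, 8))
          for t in range(100)]
kept, index = reduce_frames(frames)
print(f"stored {len(kept)} of {len(frames)} frames")
```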
Title:
Python Machine Learning. Dry Beans Classification Case
Authors:
Słowiński, Grzegorz
Links:
https://bibliotekanauki.pl/articles/50091919.pdf
Publication date:
2024-09
Publisher:
Warszawska Wyższa Szkoła Informatyki
Topics:
machine learning
deep learning
data dimension reduction
activation function
Description:
A dataset containing over 13k samples of dry-bean geometric features was analyzed using machine learning (ML) and deep learning (DL) techniques with the goal of automatically classifying the bean species. Performance in terms of accuracy and training and testing time was analyzed. First, the original dataset was reduced to eliminate redundant features (those too strongly correlated with, and echoing, others). Then the dataset was visualized and analyzed with a few shallow learning techniques and a simple artificial neural network. Cross-validation was used to check the repeatability of the learning process. The influence of data preparation (dimension reduction) on the shallow learning techniques was observed. In the case of the multilayer perceptron, three activation functions were tried: ReLU, ELU and sigmoid. Random forest proved the best model for the dry-bean classification task, reaching an average accuracy of 92.61% with reasonable training and testing times. (A sketch of the correlation-based reduction step follows this record.)
Source:
Zeszyty Naukowe Warszawskiej Wyższej Szkoły Informatyki; 2024, 18, 30; 7-26
1896-396X
2082-8349
Appears in:
Zeszyty Naukowe Warszawskiej Wyższej Szkoły Informatyki
Content provider:
Biblioteka Nauki
Article
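One plausible reading of the redundant-feature elimination step, in scikit-learn terms: drop one feature from every pair whose correlation exceeds a threshold, then cross-validate a random forest. The file name, the `Class` column, and the 0.95 threshold are assumptions, not taken from the paper:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Assumed layout: geometric features plus a "Class" label column,
# as in the UCI Dry Bean dataset (hypothetical file path).
df = pd.read_csv("Dry_Bean_Dataset.csv")
X, y = df.drop(columns="Class"), df["Class"]

# Drop one feature from every pair with |correlation| above 0.95.
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
redundant = [c for c in upper.columns if (upper[c] > 0.95).any()]
X_reduced = X.drop(columns=redundant)

# Cross-validated accuracy of the paper's best-performing model.
model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X_reduced, y, cv=5)
print(f"dropped {redundant}, accuracy {scores.mean():.4f}")
```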
Title:
Application of agent-based simulated annealing and tabu search procedures to solving the data reduction problem
Authors:
Czarnowski, I.
Jędrzejowicz, P.
Links:
https://bibliotekanauki.pl/articles/907819.pdf
Publication date:
2011
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Topics:
data reduction
machine learning
A-Team
optimization
multi-agent system
Description:
The problem considered concerns data reduction for machine learning. Data reduction aims at deciding which features and instances from the training set should be retained for further use during the learning process. Data reduction increases the capabilities and generalization properties of the learning model and shortens the learning process; it can also help in scaling up to large data sources. The paper proposes an agent-based data reduction approach with the learning process executed by a team of agents (an A-Team). Several A-Team architectures with agents executing the simulated annealing and tabu search procedures are proposed and investigated. The paper includes a detailed description of the proposed approach and discusses the results of a validating experiment. (The annealing move such an agent might execute over instance subsets is sketched after this record.)
Source:
International Journal of Applied Mathematics and Computer Science; 2011, 21, 1; 57-68
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
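A minimal sketch of the simulated-annealing move an individual optimizing agent could execute over instance subsets. The A-Team coordination layer and the tabu search variant are omitted, and the fitness definition below (1-NN accuracy with a small reward for smaller subsets) is a simplifying assumption:

```python
import math
import random

def make_fitness(X, y):
    """1-NN accuracy of the retained instances over the whole set,
    with a small reward for keeping fewer instances."""
    def fitness(mask):
        kept = [i for i, keep in enumerate(mask) if keep]
        if not kept:
            return 0.0
        correct = 0
        for i in range(len(X)):
            j = min(kept, key=lambda k: float("inf") if k == i else
                    sum((a - b) ** 2 for a, b in zip(X[i], X[k])))
            correct += y[j] == y[i]
        return correct / len(X) - 0.1 * len(kept) / len(X)
    return fitness

def anneal(fitness, n, iters=1000, t0=0.5, alpha=0.995, seed=0):
    """Single-solution simulated annealing over instance bitmasks:
    the improvement move one optimizing agent of an A-Team could run."""
    rnd = random.Random(seed)
    mask = [rnd.random() < 0.5 for _ in range(n)]
    cur = best = fitness(mask)
    best_mask, t = mask[:], t0
    for _ in range(iters):
        i = rnd.randrange(n)
        mask[i] = not mask[i]              # flip one instance in/out
        cand = fitness(mask)
        if cand >= cur or rnd.random() < math.exp((cand - cur) / t):
            cur = cand
            if cur > best:
                best, best_mask = cur, mask[:]
        else:
            mask[i] = not mask[i]          # undo the rejected move
        t *= alpha
    return best_mask, best

# Two noisy 2-D clusters as a stand-in training set.
rnd = random.Random(1)
X = [(rnd.gauss(c, 1.0), rnd.gauss(c, 1.0)) for c in (0, 4) for _ in range(40)]
y = [0] * 40 + [1] * 40
mask, score = anneal(make_fitness(X, y), len(X))
print(sum(mask), "of", len(X), "instances kept, fitness", round(score, 3))
```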
Title:
Efficient astronomical data condensation using approximate nearest neighbors
Authors:
Łukasik, Szymon
Lalik, Konrad
Sarna, Piotr
Kowalski, Piotr A.
Charytanowicz, Małgorzata
Kulczycki, Piotr
Links:
https://bibliotekanauki.pl/articles/907932.pdf
Publication date:
2019
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Topics:
big data
astronomical observation
data reduction
nearest neighbor search
kd-trees
Description:
Extracting useful information from astronomical observations represents one of the most challenging tasks of data exploration. This is largely due to the volume of the data acquired using advanced observational tools. While other challenges typical of the class of big data problems (like data variety) are also present, the size of datasets represents the most significant obstacle in visualization and subsequent analysis. This paper studies an efficient data condensation algorithm aimed at providing a compact representation of the data. It is based on fast nearest neighbor calculation using tree structures and parallel processing. In addition, the possibility of using approximate identification of neighbors, to further improve the algorithm's time performance, is also evaluated. The properties of the proposed approach, both in terms of performance and condensation quality, are experimentally assessed on astronomical datasets related to the GAIA mission. It is concluded that the introduced technique might serve as a scalable method of alleviating the problem of the dataset size. (A density-based condensation sketch in this spirit follows this record.)
Source:
International Journal of Applied Mathematics and Computer Science; 2019, 29, 3; 467-476
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
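The authors' exact condensation algorithm is not reproduced here; the sketch below is a density-ordered condensation in the same spirit, built on scipy's kd-tree. Passing `eps > 0` to `query` would switch to the approximate neighbor identification the paper evaluates; the point cloud and `k` are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def condense(points, k=10):
    """Keep the densest remaining point (smallest k-NN radius) and
    discard everything inside that radius, repeatedly. A condensation
    in the spirit of the paper, not the authors' exact algorithm."""
    tree = cKDTree(points)
    # Distance to the k-th neighbour; column 0 of the query result is
    # the point itself. eps > 0 here would make the search approximate.
    radii = tree.query(points, k=k + 1)[0][:, -1]
    alive = np.ones(len(points), dtype=bool)
    kept = []
    for i in np.argsort(radii):            # densest points first
        if alive[i]:
            kept.append(i)
            alive[tree.query_ball_point(points[i], radii[i])] = False
    return np.asarray(kept)

rng = np.random.default_rng(0)
pts = rng.standard_normal((5000, 3))       # stand-in for catalogue rows
kept = condense(pts)
print(f"condensed {len(pts)} -> {len(kept)} points")
```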
Title:
Optimization on the complementation procedure towards efficient implementation of the index generation function
Authors:
Borowik, G.
Links:
https://bibliotekanauki.pl/articles/330597.pdf
Publication date:
2018
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Topics:
data reduction
feature selection
indiscernibility matrix
logic synthesis
index generation function
Description:
In the era of big data, solutions are desired that would be capable of efficient data reduction. This paper presents a summary of research on an algorithm for complementation of a Boolean function, which is fundamental for logic synthesis and data mining. Successively, the existing problems and their proposed solutions are examined, including an analysis of current implementations of the algorithm. Then, methods to speed up the computation process and an efficient parallel implementation of the algorithm are shown; they include optimization of data representation, recursive decomposition, merging, and removal of redundant data. Besides the discussion of computational complexity, the paper compares the processing times of the proposed solution with those of well-known analysis and data mining systems. Although the presented idea is focused on searching for all possible solutions, it can be restricted to finding just those of the smallest size. Both approaches have great application potential, including proving mathematical theorems, logic synthesis (especially of index generation functions), and data processing and mining tasks such as feature selection, data discretization, rule generation, etc. The problem considered is NP-hard, and it is easy to point to examples that are not solvable within the expected amount of time. However, the solution allows the barrier of computation to be moved one step further. For example, the algorithm is currently the only one able to calculate all minimal sets of features for a few standard benchmarks. Unlike many existing methods, the algorithm additionally works with undetermined values. The result of this research is easily extendable experimental software that is the fastest among the tested solutions and data mining systems. (What is being computed, all minimal feature sets, is illustrated by brute force after this record.)
Source:
International Journal of Applied Mathematics and Computer Science; 2018, 28, 4; 803-815
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
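The paper computes all minimal feature sets through an optimized complementation of a Boolean (discernibility) function; the brute-force enumeration below shows, on a toy decision table, what is being computed rather than how the paper computes it:

```python
from itertools import combinations

# Toy decision table: each row is (attribute values..., decision).
table = [
    (1, 0, 1, "yes"),
    (1, 1, 0, "yes"),
    (0, 0, 1, "no"),
    (0, 1, 0, "no"),
]
n_attr = 3

# Discernibility sets: the attributes that distinguish every pair of
# objects carrying different decisions.
disc = [{k for k in range(n_attr) if a[k] != b[k]}
        for a, b in combinations(table, 2) if a[-1] != b[-1]]

# Every minimal attribute subset hitting all discernibility sets.
minimal = []
for r in range(1, n_attr + 1):
    for cand in combinations(range(n_attr), r):
        if any(set(m) <= set(cand) for m in minimal):
            continue                       # a smaller solution exists
        if all(set(cand) & d for d in disc):
            minimal.append(cand)
print(minimal)                             # -> [(0,)] for this table
```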
Title:
An effective data reduction model for machine emergency state detection from big data tree topology structures
Authors:
Iaremko, Iaroslav
Senkerik, Roman
Jasek, Roman
Lukastik, Petr
Links:
https://bibliotekanauki.pl/articles/2055178.pdf
Publication date:
2021
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Topics:
OPC UA
OPC tree
principal component analysis
PCA
big data analysis
data reduction
machine tool
anomaly detection
emergency states
Description:
This work presents an original model for detecting machine tool anomalies and emergency states through operation data processing. The paper is focused on an elastic hierarchical system for effective data reduction and classification, which encompasses several modules. Firstly, principal component analysis (PCA) is used to reduce many input signals from big data tree topology structures into two signals representing all of them (this stage is sketched after this record). Then a technique for segmentation of operating machine data, based on dynamic time distortion and hierarchical clustering, is used to calculate signal accident characteristics using classifiers such as the maximum level change, a signal trend, the variance of residuals, and others. The data segmentation and analysis techniques enable effective and robust detection of operating machine tool anomalies and emergency states thanks to almost real-time data collection from strategically placed sensors and results collected from previous production cycles. The emergency state detection model described in this paper could be beneficial for improving the production process, increasing production efficiency by detecting and minimizing machine tool error conditions, as well as improving product quality and overall equipment productivity. The proposed model was tested on H-630 and H-50 machine tools in a real production environment of the Tajmac-ZPS company.
Source:
International Journal of Applied Mathematics and Computer Science; 2021, 31, 4; 601-611
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
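A minimal sketch of the first stage described above: PCA collapsing many correlated machine signals into two representative traces. The synthetic 40-channel signal matrix is an assumption for illustration:

```python
import numpy as np

def pca_reduce(signals, n_components=2):
    """Reduce many synchronized signals (rows = time samples, columns =
    OPC-tree variables) to a few principal-component traces."""
    X = signals - signals.mean(axis=0)
    # SVD of the centred data; rows of Vt are the principal axes.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

# 40 correlated sensor channels: scaled copies of one duty cycle + noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
base = np.sin(2 * np.pi * 0.5 * t)
signals = np.outer(base, rng.uniform(0.5, 2.0, 40))
signals += 0.05 * rng.standard_normal(signals.shape)

reduced = pca_reduce(signals)
print(reduced.shape)   # (1000, 2): two signals standing in for all 40
```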
Title:
On a book Algorithms for data science by Brian Steele, John Chandler and Swarn Reddy
Authors:
Szajowski, Krzysztof J.
Links:
https://bibliotekanauki.pl/articles/747695.pdf
Publication date:
2017
Publisher:
Polskie Towarzystwo Matematyczne
Topics:
algorithms
associative statistics
computation
computing similarity
cluster analysis
correlation
data reduction
data mapping
data dictionary
data visualization
forecasting
Hadoop
histogram
k-means algorithm
k-nearest neighbor prediction
centroid algorithm
measures of dependence
data transformation
Description:
The publication under review is an extensive introduction to the most important fundamental principles and algorithms, together with the data structures to which they apply. The issues presented are an introduction to considerations in the field of computer science; however, it is algorithms that are the basis of data analytics and the focal point of this textbook. Extracting knowledge from data requires methods and results from at least three fields: mathematics, statistics, and computer science. The book contains clear and intuitive mathematical and statistical explanations of the individual topics, which makes the algorithms natural and transparent. The practice of data analysis, however, requires more than good scientific foundations, mathematical rigor, and a statistical-methodology perspective. The problems that generate data are enormously variable, and knowledge-extraction methods can be applied without adaptation only in the most basic algorithms. Fluency in programming and experience with real problems are indispensable. The reader is guided through the algorithmic issues using Python and R, on the basis of real problems and of analyses of the data these problems generate. A significant part of the material in the book can also be absorbed by readers without knowledge of advanced methodology. This makes the book a suitable guide for a one- or two-semester course in data analytics for upper-year students of mathematics, statistics, and computer science. Since the required background is not extensive, students who have completed a course in probability or statistics, know the basics of algebra and mathematical analysis, and have taken a programming course will have no difficulties; the text is also well suited for self-study by graduates of science programs. The basic material is well illustrated with extensive problems drawn from real applications. The book's companion website supports the reader with the data used in the book and with presentations of selected parts of the lectures. I am convinced that the subject of the book is a new field of science.
The book under review gives a comprehensive presentation of data science algorithms, that is, of practical data analytics uniting fundamental principles, algorithms, and data. Algorithms are the keystone of data analytics and the focal point of this textbook. Data science, as the authors claim, has been a discipline since 2001, although informally it existed before that date (cf. Cleveland (2001)). A crucial role was played by the graphic presentation of data as the visualization of the knowledge hidden in the data. It is a discipline which covers data mining as a tool or an important topic. The escalating demand for insights into big data requires a fundamentally new approach to architecture, tools, and practices, which is why the term data science is useful. It underscores the centrality of data in the investigation, because data are a store of potential value in the field of action. The label science invokes certain very real concepts, like the notion of public knowledge and peer review. From this point of view, data science is not a new idea; it is part of a continuum of serious thinking that dates back hundreds of years. A good example of a result of data science is Benford's law (see Arno Berger and Theodore P. Hill (2015, 2017)). In an effort to identify some of the best-known algorithms that have been widely used in the data mining community, the IEEE International Conference on Data Mining (ICDM) selected the top 10 algorithms in data mining for presentation at ICDM '06 in Hong Kong, where a panel announced them and discussed their impact and directions for further research. In the present book, clear and intuitive explanations of the mathematical and statistical foundations make the algorithms transparent, and most of the algorithms announced by the IEEE in 2006 are included. But practical data analytics requires more than just the foundations. Problems and data are enormously variable, and only the most elementary of algorithms can be used without modification. Programming fluency and experience with real and challenging data are indispensable, so the reader is immersed in Python and R and in real data analysis. By the end of the book, the reader will have gained the ability to adapt algorithms to new problems and to carry out innovative analyses.
Source:
Mathematica Applicanda; 2017, 45, 2
1730-2668
2299-4009
Appears in:
Mathematica Applicanda
Content provider:
Biblioteka Nauki
Article
Title:
Novel approach for big data classification based on hybrid parallel dimensionality reduction using spark cluster
Authors:
Ali, Ahmed Hussein
Abdullah, Mahmood Zaki
Links:
https://bibliotekanauki.pl/articles/305766.pdf
Publication date:
2019
Publisher:
Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie. Wydawnictwo AGH
Topics:
big data
dimensionality reduction
parallel processing
Spark
PCA
LDA
Description:
The big data concept has elicited studies on how to accurately and efficiently extract valuable information from such huge datasets. The major problem during big data mining is data dimensionality, due to the large number of dimensions in such datasets. The main consequence of high data dimensionality is that it affects the accuracy of machine learning (ML) classifiers; it also results in time wastage, due to the presence of several redundant features in the dataset. This problem can be solved using a fast feature reduction method. Hence, this study presents HP-PL, a new hybrid parallel feature reduction framework that utilizes Spark to facilitate feature reduction on shared/distributed-memory clusters. The evaluation of the proposed HP-PL on the KDD99 dataset showed the algorithm to be significantly faster than conventional feature reduction techniques. The proposed technique required >1 minute to select 4 dataset features from over 79 features and 3,000,000 samples on a 3-node cluster (21 cores in total); the comparative algorithm required more than 2 hours to achieve the same feat. In the proposed system, the Hadoop distributed file system (HDFS) was used to achieve distributed storage, while Apache Spark was used as the computing engine. The model development was based on a parallel model with full consideration of the high performance and throughput of distributed computing. In conclusion, the proposed HP-PL method achieves good accuracy with less memory and time than conventional feature reduction methods. The tool is publicly available at https://github.com/ahmed/Fast-HP-PL. (A minimal Spark PCA stage is sketched after this record.)
Source:
Computer Science; 2019, 20 (4); 411-429
1508-2806
2300-7036
Appears in:
Computer Science
Content provider:
Biblioteka Nauki
Article
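A minimal Spark stage of the kind HP-PL builds on: assemble the numeric columns and reduce them to a few principal components on the cluster. The HDFS path is hypothetical, and the hybrid/parallel orchestration that distinguishes HP-PL is omitted:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import NumericType
from pyspark.ml.feature import VectorAssembler, PCA

# Assemble numeric columns and reduce them to 4 principal components.
spark = SparkSession.builder.appName("hp-pl-sketch").getOrCreate()
df = spark.read.csv("hdfs:///data/kddcup99.csv",   # hypothetical path
                    header=True, inferSchema=True)

numeric = [f.name for f in df.schema.fields
           if isinstance(f.dataType, NumericType)]
vec = VectorAssembler(inputCols=numeric, outputCol="features")
pca = PCA(k=4, inputCol="features", outputCol="reduced")

model = pca.fit(vec.transform(df))        # distributed across the cluster
reduced = model.transform(vec.transform(df)).select("reduced")
print(model.explainedVariance)
spark.stop()
```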
Title:
Multilinear Filtering Based on a Hierarchical Structure of Covariance Matrices
Authors:
Szwabe, Andrzej
Ciesielczyk, Michal
Misiorek, Pawel
Links:
https://bibliotekanauki.pl/articles/1373696.pdf
Publication date:
2015
Publisher:
Uniwersytet Jagielloński. Wydawnictwo Uniwersytetu Jagiellońskiego
Topics:
tensor-based data modeling
multilinear PCA
random indexing
dimensionality reduction
multilinear data filtering
higher-order SVD
Description:
We propose a novel model of multilinear filtering based on a hierarchical structure of covariance matrices, each matrix being extracted from the input tensor in accordance with a specific set-theoretic model of data generalization, such as derivation of expectation values. The experimental results presented in this paper confirm that the investigated approaches to tensor-based data representation and processing outperform the standard collaborative filtering approach in the 'cold-start' personalized recommendation scenario (of very sparse input data). Furthermore, it has been shown that the proposed method is superior to standard tensor-based frameworks such as N-way Random Indexing (NRI) and Higher-Order Singular Value Decomposition (HOSVD) in terms of both the AUROC measure and computation time. (A compact HOSVD sketch follows this record.)
Source:
Schedae Informaticae; 2015, 24; 103-112
0860-0295
2083-8476
Appears in:
Schedae Informaticae
Content provider:
Biblioteka Nauki
Article
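For reference, a compact HOSVD (one of the baselines named above) in plain numpy: the factor matrices come from SVDs of the mode unfoldings, and the core tensor from mode products with their transposes. Tensor sizes and ranks are illustrative:

```python
import numpy as np

def unfold(T, mode):
    """Mode-m matricization: mode m becomes the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def multiply(T, M, mode):
    """Mode-m product of tensor T with matrix M."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1),
                       0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: per-mode factor matrices plus core."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = T
    for m, Um in enumerate(U):
        core = multiply(core, Um.T, m)
    return core, U

rng = np.random.default_rng(0)
T = rng.standard_normal((20, 30, 40))
core, U = hosvd(T, (5, 6, 7))

# Reconstruct from the truncated decomposition and check the error.
approx = core
for m, Um in enumerate(U):
    approx = multiply(approx, Um, m)
print(core.shape, round(np.linalg.norm(T - approx) / np.linalg.norm(T), 3))
```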
Title:
A comparative study for outlier detection methods in high dimensional text data
Authors:
Park, Cheong Hee
Links:
https://bibliotekanauki.pl/articles/2201316.pdf
Publication date:
2023
Publisher:
Społeczna Akademia Nauk w Łodzi. Polskie Towarzystwo Sieci Neuronowych
Topics:
curse of dimensionality
dimension reduction
high dimensional text data
outlier detection
Description:
Outlier detection aims to find data samples that are significantly different from other data samples. Various outlier detection methods have been proposed and shown to be able to detect anomalies in many practical problems. However, in high dimensional data, conventional outlier detection methods often behave unexpectedly due to a phenomenon called the curse of dimensionality. In this paper, we compare and analyze outlier detection performance in various experimental settings, focusing on text data with dimensions typically in the tens of thousands. Experimental setups were simulated to compare the performance of outlier detection methods in unsupervised versus semi-supervised mode and for uni-modal versus multi-modal data distributions. The performance of outlier detection methods based on dimension reduction is compared, and a discussion on using the k-NN distance in high dimensional data is also provided. Analysis through experimental comparison in various environments can provide insights into the application of outlier detection methods to high dimensional data. (A k-NN distance scoring sketch follows this record.)
Source:
Journal of Artificial Intelligence and Soft Computing Research; 2023, 13, 1; 5-17
2083-2567
2449-6499
Appears in:
Journal of Artificial Intelligence and Soft Computing Research
Content provider:
Biblioteka Nauki
Article
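A sketch of the k-NN distance scoring discussed above, on a toy corpus: TF-IDF vectors live in a space whose dimension grows with the vocabulary, and the mean distance to the k nearest neighbours serves as the outlier score. The documents and `k` are illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Toy corpus; on real text the TF-IDF space has tens of thousands of
# dimensions, which is the regime the study examines.
docs = ["the cat sat on the mat",
        "dogs and cats are pets",
        "stock markets fell sharply today",
        "the dog chased the cat"]
X = TfidfVectorizer().fit_transform(docs)

k = 2
nn = NearestNeighbors(n_neighbors=k + 1, metric="cosine").fit(X)
dist, _ = nn.kneighbors(X)             # column 0 is the point itself
scores = dist[:, 1:].mean(axis=1)      # mean distance to k neighbours
print(scores.argmax(), scores)         # the finance sentence stands out
```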
Title:
Implementation of the attribute significance analysis with the use of soft reduction of attributes in the rough set theory on the basis of the SQL mechanisms
Authors:
Nozdrzykowski, Ł.
Nozdrzykowska, M.
Links:
https://bibliotekanauki.pl/articles/114726.pdf
Publication date:
2017
Publisher:
Stowarzyszenie Inżynierów i Techników Mechaników Polskich
Topics:
rough set theory
data analysis
soft reduction of attributes
SQL implementation
Description:
This article presents a way to use databases supporting SQL and PL/SQL in the implementation of a method of attribute significance analysis with soft reduction of attributes in the rough set theory. A number of SQL queries which facilitate the implementation are presented. The original mechanisms presented previously [1] are supplemented with queries which facilitate attribute coding. The authors present a complete implementation of the method, from the coding of attributes to the determination of the significance of conditional attributes. Applying queries to the database eliminates the need to build dedicated data grouping and data mining mechanisms and to count repetitions of identical rules in the reduced decision rule space. Without the support of a database, creating universal data grouping and data mining mechanisms that could be used with any number of attributes is a challenging task. (A small sqlite3 illustration of this grouping idea follows this record.)
Source:
Measurement Automation Monitoring; 2017, 63, 1; 10-14
2450-2855
Appears in:
Measurement Automation Monitoring
Content provider:
Biblioteka Nauki
Article
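A small illustration of the idea, using sqlite3 so it runs anywhere: a single GROUP BY over the reduced attribute set counts repetitions of identical rules and flags groups made inconsistent by dropping an attribute. The schema and rows are invented for the example, not taken from the article:

```python
import sqlite3

# Invented decision table: three condition attributes and a decision.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE rules (a1 INT, a2 INT, a3 INT, decision INT)")
con.executemany("INSERT INTO rules VALUES (?, ?, ?, ?)",
                [(1, 0, 1, 1), (1, 0, 0, 0), (0, 1, 0, 0),
                 (0, 1, 1, 0), (1, 1, 1, 1)])

# Group rows by the reduced attribute set {a1, a2}: COUNT(*) gives the
# repetitions of identical (reduced) rules, and more than one distinct
# decision in a group marks the inconsistency introduced by dropping a3.
for row in con.execute("""
        SELECT a1, a2,
               COUNT(*)                 AS repetitions,
               COUNT(DISTINCT decision) AS decisions
        FROM rules
        GROUP BY a1, a2"""):
    print(row)
```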
Title:
An algorithm for reducing the dimension and size of a sample for data exploration procedures
Authors:
Kulczycki, P.
Łukasik, S.
Links:
https://bibliotekanauki.pl/articles/330110.pdf
Publication date:
2014
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Topics:
dimension reduction
sample size reduction
linear transformation
simulated annealing
data mining
Description:
The paper deals with the issue of reducing the dimension and size of a data set (random sample) for exploratory data analysis procedures. The concept of the algorithm investigated here is based on linear transformation to a space of smaller dimension while retaining, as far as possible, the same distances between particular elements. The elements of the transformation matrix are computed using the metaheuristic of parallel fast simulated annealing. Moreover, data set elements which have undergone a significant change in location relative to the others are eliminated or have their importance decreased. The presented method can have universal application in a wide range of data exploration problems, offering flexible customization, possibility of use in a dynamic data environment, and performance comparable to or better than that of principal component analysis. Its positive features were verified in detail for the domain's fundamental tasks of clustering, classification and detection of atypical elements (outliers). (The distance-preservation objective and the annealing loop are sketched after this record.)
Source:
International Journal of Applied Mathematics and Computer Science; 2014, 24, 1; 133-149
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
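A scalar sketch of the two ingredients named above: a stress function measuring how well pairwise distances survive the linear map, and a plain (non-parallel) simulated annealing loop over the matrix entries. This stands in for, and is much simpler than, the paper's parallel fast simulated annealing:

```python
import numpy as np

def stress(X, A):
    """Sum of squared differences between pairwise distances before
    and after the linear map A, i.e. the quantity to keep small."""
    d_hi = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    Y = X @ A
    d_lo = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    return ((d_hi - d_lo) ** 2).sum()

def anneal(X, dim, iters=2000, t0=1.0, alpha=0.999, seed=0):
    """Plain simulated annealing over the transformation matrix."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[1], dim)) / np.sqrt(X.shape[1])
    cur, t = stress(X, A), t0
    for _ in range(iters):
        B = A + 0.01 * rng.standard_normal(A.shape)   # perturb matrix
        cand = stress(X, B)
        if cand < cur or rng.random() < np.exp((cur - cand) / t):
            A, cur = B, cand
        t *= alpha
    return A, cur

X = np.random.default_rng(1).standard_normal((60, 10))
A, final = anneal(X, dim=3)
print(f"final stress: {final:.2f}")
```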
Title:
Presenting a technique for registering images and range data using a topological representation of a path within an environment
Authors:
Ferreira, F.
Davim, L.
Rocha, R.
Dias, J.
Santos, V.
Links:
https://bibliotekanauki.pl/articles/385035.pdf
Publication date:
2007
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Topics:
sensor feature integration
binary data
Bernoulli mixture model
dimensionality reduction
robot localisation
change detection
Description:
This article presents a novel method that utilizes a topological representation of a path, created from sequences of images from digital cameras and data from range sensors. The topological representation of the environment is created by leading the robot around the environment during a familiarisation phase. While moving down the same path, the robot is able to localise itself within the topological representation that has been previously created. The principal contribution to the state of the art is that, by using a topological representation of the environment, individual 3D data sets acquired from a set of range sensors need not be registered in a single, global coordinate reference system. Instead, 3D point clouds for small sections of the environment are indexed to a sequence of multi-sensor views of images and range data. Such a registration procedure can be useful in the construction of 3D representations of large environments and in the detection of changes that might occur within these environments.
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2007, 1, 3; 47-56
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
