
You are searching for the phrase "product data" by the criterion: Subject


Showing 1-11 of 11
Title:
Empirical aspects on defining product data for rapid productisation
Authors:
Kangas, Nikolaus
Kropsu-Vehkapera, Hanna
Haapasalo, Harri
Kinnunen
Links:
https://bibliotekanauki.pl/articles/633747.pdf
Publication date:
2013
Publisher:
Uniwersytet Marii Curie-Skłodowskiej. Wydawnictwo Uniwersytetu Marii Curie-Skłodowskiej
Subjects:
product data
product data management
product definition
productisation
rapid productisation
information systems
synergy
research
Description:
Purpose – Emerging customer needs are calling for companies to quickly create new solutions in the business front-end, e.g. in sales situations. However, bypassing predefined product development processes, and thus product definition, turns out to be problematic and leads to later problems in product management. The main objective is to study what special issues are related to defining product data when rapidly productising products. Design/methodology/approach – This is a qualitative case study including three descriptive company cases. The study is conducted to show the special characteristics that occur when rapidly defining products and their product data. Findings – The results indicate that the practices for carrying out rapid productisation (RP) are very company specific. However, three common forms of RP can be recognised. It can be concluded that product data management needs in the productisation process depend on the product structure and the original customer order point (COP) of the products. This study analyses the link between the COP and the type of module added in rapid productisation, and how it affects the product data handled. Practical implications – By focusing on the relevant product data, companies can hasten rapid productisation and ensure sufficient product management during the order-delivery process. Originality/value – The concept of rapid productisation itself is quite novel, although an acute issue in practice. This research gives empirical insight into essential product data aspects when rapidly productising a new item.
Source:
International Journal of Synergy and Research; 2013, 2, 1-2
2083-0025
Appears in:
International Journal of Synergy and Research
Content provider:
Biblioteka Nauki
Article
Title:
Development of integrated product model for supporting production monitoring system on an automobile assembly industry
Authors:
Raharno, S.
Martawirya, Y. Y.
Links:
https://bibliotekanauki.pl/articles/246032.pdf
Publication date:
2011
Publisher:
Instytut Techniczny Wojsk Lotniczych
Subjects:
integrated product data model
production monitoring system
production information system
automobile assembly industry
Description:
Good production planning is the first step towards producing products effectively and efficiently, and it requires information about the real conditions on the shop floor. Without knowing the actual shop-floor conditions, it is difficult to plan production well; a production monitoring system is therefore needed. This paper deals with the development of an integrated product model for supporting a production monitoring system in the automobile assembly industry. Building a production monitoring system requires integrating information between production control and the shop floor. One way to integrate the two systems is to use a product model that is a representation of a real product. The research method used in this work is the development of product models and other related models, based on object-oriented modelling. A database in accordance with the developed models, together with interfaces used to manipulate them, has also been developed. The main proposed models are the product type model and the product model. The product type model represents product designs, whereas the product model represents actual products. While the data in the product type model are relatively static, the data in the product model are dynamic, depending on the state of the real product. Each product model has a production process sequence in accordance with the process sequence of the represented product. Each time a process on a real product is finished, the related process model in the product model is updated. The condition of the processes that have occurred on the shop floor can thus be determined by manipulating the data of the product models.
Source:
Journal of KONES; 2011, 18, 3; 367-374
1231-4005
2354-0133
Appears in:
Journal of KONES
Content provider:
Biblioteka Nauki
Article
Title:
Identifying the cognitive gap in the causes of product name ambiguity in e-commerce
Authors:
Niemir, Maciej
Mrugalska, Beata
Links:
https://bibliotekanauki.pl/articles/2203762.pdf
Publication date:
2022
Publisher:
Wyższa Szkoła Logistyki
Subjects:
basic product data
product catalog
e-data catalog
master data
data quality
dirty data
podstawowe dane produktowe
katalog produktów
katalog e-danych
dane podstawowe
jakość danych
dane brudne
Description:
Background: Global product identification standards and methods of product data exchange are known and widespread in the traditional market. However, it turns out that the e-commerce market needs data that have not received much attention so far and for which no standards have been established in relation to their content. Furthermore, their current quality is often perceived as below expectations. This paper discusses the issue of the product name and highlights its problems in the context of e-commerce. Attention is also drawn to the source of liability for erroneous data. Methods: The research methodology is based on the analysis of data on products available on the Internet through product catalog services, online stores, and e-marketplaces, mainly in Poland, but it addresses a global problem. Three research scenarios were chosen, comparing product names aggregated by GTIN, starting with e-commerce sites and ending with product catalogs working with manufacturers. In addition, a scenario of name-photo compatibility was included. Results: The results show that the product name, which in the real world is an integral part of the product as it appears on the label provided by the manufacturer, is in the virtual world an attribute modified, consciously or not, by the service provider. It turns out that discrepancies appear already at the source, at the manufacturer's level, through publishing different names for the same product when working with data catalogs or publishing on product pages, contributing to the so-called snowball effect. Conclusions: On the Internet, products do not have a fixed name that fully describes the product, which causes problems in uniquely identifying the same products in different data sources. This in turn reduces the quality of data aggregation, search, and reliability. This state of affairs is not solely the responsibility of e-commerce marketplace vendors, but also of the manufacturers themselves, who do not take care to publicize an unambiguous and permanent name for their products in digital form. Moreover, there are no unambiguous global guidelines for the construction of a full product name. The lack of such a template encourages individual interpretations of how to describe a product.
Source:
LogForum; 2022, 18, 3; 357-364
1734-459X
Appears in:
LogForum
Content provider:
Biblioteka Nauki
Article
Title:
Identifying part commonalities in a manufacturing company database
Authors:
Kwapisz, J.
Infante, V.
Links:
https://bibliotekanauki.pl/articles/94923.pdf
Publication date:
2016
Publisher:
Szkoła Główna Gospodarstwa Wiejskiego w Warszawie. Wydawnictwo Szkoły Głównej Gospodarstwa Wiejskiego w Warszawie
Subjects:
commonality
data mining
industrial database analysis
part reuse
product platform
product family
Description:
Manufacturing companies that produce and assemble multiple products rely on databases containing thousands or even millions of parts. These databases are expensive to support and maintain, and their inherent complexity prevents end users from utilizing them fully. Designers and engineers are often unable to find previously created parts that they could potentially reuse, so they add one more part to the database. Engineering improvements made without removing the previous version of a component also cause an avoidable increase in the number of elements in the database. Reuse of parts, or the planned development of common parts across products, brings many benefits for manufacturers. A search algorithm applied across part databases and varying projects allows similar parts to be identified. The goal is to compare part names and attributes, resulting in the assignment of a similarity score. Determining common and differentiating part attributes and characteristics between pairs of components allows the nomination of parts that can become shared across different products. The case study utilizes an industrial example to evaluate and assess the feasibility of the proposed method for identifying commonality opportunities. It turned out that it is possible to find many parts that can potentially be shared between different products.
Source:
Information Systems in Management; 2016, 5, 3; 336-346
2084-5537
2544-1728
Appears in:
Information Systems in Management
Content provider:
Biblioteka Nauki
Article
Title:
Quality adjusted GEKS-type indices for price comparisons based on scanner data
Authors:
Białek, Jacek
Links:
https://bibliotekanauki.pl/articles/18105175.pdf
Publication date:
2023-06-13
Publisher:
Główny Urząd Statystyczny
Subjects:
scanner data
product classification
product matching
Consumer Price Index
multilateral indices
GEKS index
Description:
A wide variety of retailers (supermarkets, home electronics, Internet shops, etc.) provide scanner data containing information at the level of the barcode, e.g. the Global Trade Item Number (GTIN). As scanner data provide complete transaction information, we may use the expenditure shares of items as weights for calculating price indices at the lowest (elementary) level of data aggregation. The challenge here is the choice of the index formula, which should be able to reduce chain drift bias and substitution bias. Multilateral index methods seem to be the best choice due to the dynamic character of scanner data. These indices work on a whole time window and are transitive, which is key to the elimination of the chain drift effect. Following the so-called identity test, it may be expected that even when only prices return to their original values, the index should become one. Unfortunately, the commonly used multilateral indices (GEKS, CCDI, GK, TPD, TDH) do not meet the identity test. The paper proposes two multilateral indices and their weighted versions. On the one hand, the design of the proposed indices is based on the idea of the GEKS index. On the other hand, similarly to the Geary-Khamis method, it requires quality adjusting. It is shown that the proposed indices meet the identity test and most other tests. In an empirical and simulation study, these indices are compared with the SPQ index, which is relatively new and also meets the identity test. The analytical considerations as well as the empirical studies confirm the high usefulness of the proposed indices.
Source:
Statistics in Transition new series; 2023, 24, 3; 151-169
1234-7655
Appears in:
Statistics in Transition new series
Content provider:
Biblioteka Nauki
Article
Title:
Podejście wielomodelowe analizy danych symbolicznych w ocenie pozycji produktów na rynku
Ensemble learning for symbolic data in product positioning
Authors:
Pełka, Marcin
Links:
https://bibliotekanauki.pl/articles/424929.pdf
Publication date:
2013
Publisher:
Wydawnictwo Uniwersytetu Ekonomicznego we Wrocławiu
Subjects:
ensemble clustering
cluster analysis of symbolic data
product positioning
Description:
Product positioning covers a wide range of business activities. Positioning is the process by which marketers try to create an image or identity in the minds of their target market for a product, brand, or organization. The main aim of the paper is to present and apply ensemble learning for symbolic data in cluster analysis in order to evaluate a product's position. The empirical part of the paper presents the application of the co-occurrence matrix and the bagging algorithm in ensemble learning for symbolic data (car market data were used). The two approaches reached almost the same results in terms of the adjusted Rand index.
Source:
Econometrics. Ekonometria. Advances in Applied Data Analytics; 2013, 2(40); 95-102
1507-3866
Appears in:
Econometrics. Ekonometria. Advances in Applied Data Analytics
Content provider:
Biblioteka Nauki
Article
Title:
Classifiers for doubly multivariate data
Authors:
Krzyśko, Mirosław
Skorzybut, Michał
Wołyński, Waldemar
Links:
https://bibliotekanauki.pl/articles/729872.pdf
Publication date:
2011
Publisher:
Uniwersytet Zielonogórski. Wydział Matematyki, Informatyki i Ekonometrii
Subjects:
classifiers
repeated measures data (doubly multivariate data)
Kronecker product covariance structure
compound symmetry covariance structure
AR(1) covariance structure
maximum likelihood estimates
likelihood ratio tests
Description:
This paper proposes new classifiers under the assumption of multivariate normality for multivariate repeated measures data (doubly multivariate data) with Kronecker product covariance structures. These classifiers are especially useful when the number of observations is not large enough to estimate the covariance matrices, and thus the traditional classifiers fail. The quality of these new classifiers is examined on some real data. Computational schemes for maximum likelihood estimates of required class parameters, and the likelihood ratio test relating to the structure of the covariance matrices, are also given.
Source:
Discussiones Mathematicae Probability and Statistics; 2011, 31, 1-2; 5-27
1509-9423
Appears in:
Discussiones Mathematicae Probability and Statistics
Content provider:
Biblioteka Nauki
Article
Title:
Fundamentals of a recommendation system for the aluminum extrusion process based on data-driven modeling
Authors:
Perzyk, Marcin
Kochański, Andrzej
Kozłowski, Jacek
Links:
https://bibliotekanauki.pl/articles/29520062.pdf
Publication date:
2022
Publisher:
Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie. Wydawnictwo AGH
Subjects:
aluminum extrusion
advisory system
product defect
data mining
neural networks
system doradczy
wada produktu
eksploracja danych
sieci neuronowe
Description:
The aluminum profile extrusion process is briefly characterized in the paper, together with a presentation of historical, automatically recorded data. The initial selection of the important, widely understood process parameters was made using statistical methods such as correlation analysis for continuous and categorical (discrete) variables and ‘inverse’ ANOVA and Kruskal-Wallis methods. The selected process variables were used as inputs for MLP-type neural models, with two main product defects as the numerical outputs taking values 0 and 1. A multi-variant development program was applied to the neural networks, and the best neural models were utilized to find the characteristic influence of the process parameters on product quality. The final result of the research is the basis of a recommendation system for the significant process parameters that uses a combination of information from previous cases and neural models.
Source:
Computer Methods in Materials Science; 2022, 22, 4; 173-188
2720-4081
2720-3948
Appears in:
Computer Methods in Materials Science
Content provider:
Biblioteka Nauki
Article
Title:
Artificial intelligence-based decision-making algorithms, Internet of Things sensing networks, and sustainable cyber-physical management systems in big data-driven cognitive manufacturing
Authors:
Lazaroiu, George
Androniceanu, Armenia
Grecu, Iulia
Grecu, Gheorghe
Neguriță, Octav
Links:
https://bibliotekanauki.pl/articles/19322650.pdf
Publication date:
2022
Publisher:
Instytut Badań Gospodarczych
Subjects:
cognitive manufacturing
Artificial Intelligence of Things
cyber-physical system
big data-driven deep learning
real-time scheduling algorithm
smart device
sustainable product lifecycle management
Description:
Research background: With increasing evidence of cognitive technologies progressively integrating themselves at all levels of manufacturing enterprises, there is an instrumental need to comprehend how cognitive manufacturing systems can provide increased value and precision in complex operational processes. Purpose of the article: In this research, prior findings were cumulated, proving that cognitive manufacturing integrates artificial intelligence-based decision-making algorithms, real-time big data analytics, sustainable industrial value creation, and digitized mass production. Methods: Throughout April and June 2022, employing the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) guidelines, a quantitative literature review of the ProQuest, Scopus, and Web of Science databases was performed, with search terms including "cognitive Industrial Internet of Things", "cognitive automation", "cognitive manufacturing systems", "cognitively-enhanced machine", "cognitive technology-driven automation", "cognitive computing technologies", and "cognitive technologies". The Systematic Review Data Repository (SRDR), a software program for the collecting, processing, and analysis of data, was leveraged for this research. The quality of the selected scholarly sources was evaluated by harnessing the Mixed Method Appraisal Tool (MMAT). AMSTAR (Assessing the Methodological Quality of Systematic Reviews) deployed artificial intelligence and intelligent workflows, and Dedoose was used for mixed methods research. VOSviewer layout algorithms and Dimensions bibliometric mapping served as data visualization tools. Findings & value added: Cognitive manufacturing systems are developed on sustainable product lifecycle management, Internet of Things-based real-time production logistics, and deep learning-assisted smart process planning, optimizing value creation capabilities and artificial intelligence-based decision-making algorithms. Subsequent interest should be oriented to how predictive maintenance can assist in cognitive manufacturing through the use of artificial intelligence-based decision-making algorithms, real-time big data analytics, sustainable industrial value creation, and digitized mass production.
Source:
Oeconomia Copernicana; 2022, 13, 4; 1047-1080
2083-1277
Appears in:
Oeconomia Copernicana
Content provider:
Biblioteka Nauki
Article
Title:
Analiza wpływu rozdzielczości danych źródłowych na jakość produktów fotogrametrycznych obiektu architektury
Analysis of source data resolution on photogrammetric products quality of architectural object
Authors:
Markiewicz, J.S.
Kowalczyk, M.
Podlasiak, P.
Bakuła, K.
Zawieska, D.
Bujakiewicz, A.
Andrzejewska, E.
Links:
https://bibliotekanauki.pl/articles/129691.pdf
Publication date:
2013
Publisher:
Stowarzyszenie Geodetów Polskich
Subjects:
integracja danych
naziemny skaning laserowy
naziemne zdjęcia cyfrowe
ortoobraz
produkt wektorowy
rozdzielczość geometryczna
data integration
close range laser scanning
terrestrial digital photos
true ortho
vector product
geometric resolution
Description:
Wraz z szybkim rozwojem bezinwazyjnych metod pomiarowych opartych na pomiarze odległości od badanego obiektu zwiększyła się możliwość pozyskiwania danych z większą dokładnością przy jednoczesnym skróceniu czasu pomiaru. Pozwoliło to znacznie poszerzyć efektywność metod fotogrametrycznych w dokumentacji i analizie obiektów dziedzictwa kulturowego, poprzez połączenie danych z naziemnego skaningu laserowego z obrazami. Taka integracja pozwala pozyskać wymagane zwykle w tych zastosowaniach 3D modele obiektów, a także cyfrowe mapy obrazowe – ortoobrazy oraz produkty wektorowe. Jakość produktów fotogrametrycznych jest charakteryzowana zarówno ich dokładnością jak i zasobem treści, tj. liczbą i wielkością zawartych w nich detali. Jest to zawsze zależne od rozdzielczości geometrycznej danych źródłowych. Badania przedstawione w niniejszym referacie dotyczą oceny jakości dwóch produktów, obrazowego - ortoobrazu i wektorowego, wygenerowanych dla wybranych fragmentów obiektu architektonicznego. Danymi źródłowymi są chmury punktów z naziemnego skaningu laserowego oraz obrazy cyfrowe. Oba rodzaje danych zostały pozyskane w kilku rozdzielczościach. Numeryczne Modele Obiektu pozyskane z chmur punktów o różnej rozdzielczości, stanowią podstawę do orientacji zewnętrznej obrazów oraz stworzenia kilku wersji ortoobrazów o różnych rozdzielczościach. Porównanie tych produktów pomiędzy sobą pozwoli ocenić wpływ rozdzielczości danych źródłowych na ich jakość (dokładność, zasób informacji). Dodatkowa analiza zostanie przeprowadzona na podstawie porównania produktów wektorowych, pozyskanych na podstawie wektoryzacji (monoplotingu) ortoobrazów.
Due to the considerable development of non-invasive measurement technologies based on distance measurement, the possibilities of data acquisition have increased while the measurement time has been reduced. By combining close-range laser scanning data and images, this has enabled a wider expansion of the effectiveness of photogrammetric methods in the registration and analysis of cultural heritage objects. Such integration allows the acquisition of three-dimensional models of objects and, in addition, digital image maps: true-orthos and vector products. The quality of photogrammetric products is defined by their accuracy and the range of content, i.e. by the number and minuteness of details. This always depends on the geometric resolution of the source data. The research presented in this paper concerns the quality evaluation of two products, a true-ortho image and vector data, created for selected parts of an architectural object. The source data are point clouds acquired from close-range laser scanning, together with digital images. Both data sets were acquired at several resolutions. The exterior orientation of the images and several versions of the true-ortho are based on numerical models of the object acquired at the specified resolutions. The comparison of these products gives the opportunity to rate the influence of the source data resolution on their quality (accuracy, information volume). An additional analysis is performed by comparing vector products acquired from monoplotting on the true-ortho images. The experiment proved that geometric resolution has a significant impact on the possibility of generating TLS scans and on the accuracy of their relative orientation. If the creation of high-resolution products is considered, a scanning resolution of about 2 mm should be applied, and in the case of architectural details, 1 mm. It was also noted that the scanning angle and object structure have a significant influence on the accuracy and completeness of the data. For the creation of true-orthoimages for architectural purposes, high-resolution ground-based images in geometry close to the normal case are recommended to improve their quality. The use of grayscale true-orthoimages with values from scanner intensity is not advised. The presented research also proved that the accuracy of manual and automated vectorisation results depends significantly on the resolution of the generated orthoimages (scan and image resolution), and mainly on the blur effect and pixel size.
Source:
Archiwum Fotogrametrii, Kartografii i Teledetekcji; 2013, Spec.; 69-84
2083-2214
2391-9477
Appears in:
Archiwum Fotogrametrii, Kartografii i Teledetekcji
Content provider:
Biblioteka Nauki
Article
Title:
A multi-stage risk-adjusted control chart for monitoring and early-warning of products sold with two-dimensional warranty
Karta kontrolna do wieloetapowego monitorowania produktów sprzedawanych z gwarancją dwuwymiarową, z korektą ryzyka i wczesne ostrzeganie o wadach produkcyjnych na podstawie danych z reklamacji
Authors:
Dong, F.
Liu, Z.
Wu, Y.
Hao, J.
Links:
https://bibliotekanauki.pl/articles/301029.pdf
Publication date:
2018
Publisher:
Polska Akademia Nauk. Polskie Naukowo-Techniczne Towarzystwo Eksploatacyjne PAN
Subjects:
two-dimensional product warranty
claims data
monitoring and early-warning
multi-stage control chart
accelerated failure model
risk adjustment
dwuwymiarowa gwarancja na produkt
dane o roszczeniach z tytułu gwarancji
monitorowanie i wczesne ostrzeganie
karta kontrolna procesu wieloetapowego
model przyspieszonego uszkodzenia
korekta ryzyka
Description:
Warranty claims data contain valuable information about the quality and reliability of products. The monitoring and early-warning of warranty claims data are of great significance to the manufacturer, as they allow emerging quality or reliability problems to be identified and solved as soon as possible. However, although such monitoring has been used widely in the automobile industry, no studies have been carried out on the monitoring and early-warning of claims data for products sold with a two-dimensional warranty. In order to fill this gap, fitting the two-dimensional warranty claims data with an accelerated failure time (AFT) model, a multi-stage risk-adjusted control chart is proposed in this paper, for which a reasonable product sales tracking time and a monitoring time are suggested to reduce the influence of sales delay and fluctuating claim rates. By comparison with the traditional Cumulative Sum Control Chart (CUSUM), the applicability and availability of the proposed model are demonstrated in the final section.
Roszczenia gwarancyjne stanowią cenne źródło informacji na temat jakości i niezawodności produktów. Monitorowanie danych dotyczących roszczeń gwarancyjnych i wczesne ostrzeganie w oparciu o te dane ma wielkie znaczenie dla producenta, ponieważ pozwala rozpoznawać i rozwiązywać pojawiające się problemy związane z niezawodnością w jak najkrótszym czasie. Chociaż ten rodzaj monitorowania i wczesnego ostrzegania jest szeroko stosowany w przemyśle motoryzacyjnym, nie przeprowadzono dotąd żadnych badań na temat tych procesów w odniesieniu do produktów sprzedawanych z gwarancją dwuwymiarową. W celu wypełnienia tej luki, dane o reklamacjach składanych na podstawie gwarancji dwuwymiarowych dopasowano modelem uszkodzeń przyspieszonych (accelerated failure model, AFT), a następnie przedstawiono koncepcję karty kontrolnej monitorowania wieloetapowego z korektą ryzyka, dla której zaproponowano odpowiedni czas śledzenia sprzedaży produktu i czas monitorowania, mając na uwadze zmniejszenie wpływu opóźnień w sprzedaży i wahań liczby roszczeń zgłaszanych z tytułu gwarancji. Możliwości zastosowania i dostępność proponowanego modelu porównano z tradycyjną kartą sum skumulowanych.
Source:
Eksploatacja i Niezawodność; 2018, 20, 2; 300-307
1507-2711
Appears in:
Eksploatacja i Niezawodność
Content provider:
Biblioteka Nauki
Article
