Foundations and Trends in Signal Processing Ser.: Signal Decomposition Using Masked Proximal Operators by Bennet E. Meyers and Stephen P. Boyd (2023, Trade Paperback)

Great Book Prices Store (339650)
96.8% positive Feedback
Price:
US $70.95
Approximately £52.43
+ $19.99 postage
Estimated delivery Mon, 4 Aug - Mon, 18 Aug
Returns:
14-day returns. Buyer pays for return postage. If you use an eBay delivery label, it will be deducted from your refund amount.
Condition:
New
Signal Decomposition Using Masked Proximal Operators, ISBN 1638281025, ISBN-13 9781638281023, Brand New, Free shipping in the US

About this product

Product Identifiers

Publisher: Now Publishers
ISBN-10: 1638281025
ISBN-13: 9781638281023
eBay Product ID (ePID): 8058636454

Product Key Features

Number of Pages: 92
Language: English
Publication Name: Signal Decomposition Using Masked Proximal Operators
Subject: Engineering (General), Signals & Signal Processing
Publication Year: 2023
Type: Textbook
Author: Bennet E. Meyers, Stephen P. Boyd
Subject Area: Technology & Engineering
Series: Foundations and Trends in Signal Processing Ser.
Format: Trade Paperback

Dimensions

Item Weight: 5 oz
Item Length: 9.2 in
Item Width: 6.1 in

Additional Product Features

Intended Audience: Scholarly & Professional
Dewey Edition: 23
Illustrated: Yes
Dewey Decimal: 621.3822
Table of Contents: 1. Introduction; 2. Signal Decomposition; 3. Background and Related Work; 4. Solution Methods; 5. Component Class Attributes; 6. Component Class Examples; 7. Examples; Acknowledgements; Appendix; References
Synopsis: The decomposition of a time series signal into components is an age-old problem, with many different approaches proposed, including traditional filtering and smoothing, seasonal-trend decomposition, Fourier and other decompositions, PCA and newer variants such as nonnegative matrix factorization, various statistical methods, and many heuristic methods. This monograph covers the well-studied problem of decomposing a vector time series signal into components with different characteristics, such as smooth, periodic, nonnegative, or sparse. It presents a general framework in which the components are defined by loss functions (which can include constraints), and the signal decomposition is carried out by minimizing the sum of the losses of the components (subject to the constraints). When each loss function is the negative log-likelihood of a density for the signal component, this framework coincides with maximum a posteriori probability (MAP) estimation, but it also includes many other interesting cases. Summarizing and clarifying prior results, the monograph presents two distributed optimization methods for computing the decomposition, which find the optimal decomposition when the component class loss functions are convex, and are good heuristics when they are not. Both methods require only the masked proximal operator of each component loss function, a generalization of the well-known proximal operator that handles missing entries in its argument (see the illustrative sketch below). Both methods are distributed, i.e., they handle each component separately. Also included are tractable methods for evaluating the masked proximal operators of some loss functions that have not appeared in the literature.
LC Classification Number: TK5102.9
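
Illustrative sketch: the synopsis describes a masked proximal operator, i.e., a proximal operator whose quadratic term penalizes only the observed (unmasked) entries of its argument. The short Python sketch below shows what this looks like for one simple component class, a sum-of-squares loss, where the masked prox has a closed form. It is a minimal illustration under that assumption, not the authors' reference implementation; the function name and the theta and rho parameters are chosen here for illustration only.

```python
import numpy as np

def masked_prox_sum_squares(v, known, theta=1.0, rho=1.0):
    """Masked proximal operator of phi(x) = theta * ||x||^2 (illustrative sketch).

    Solves  argmin_x  theta * ||x||^2 + (rho / 2) * sum_{i in known} (x_i - v_i)^2,
    i.e., the usual proximal step, but with the quadratic term applied only to
    the observed entries flagged by the boolean mask `known`.
    """
    x = np.zeros_like(v, dtype=float)
    # Observed entries: closed-form minimizer of theta*x^2 + (rho/2)*(x - v)^2.
    x[known] = (rho / (2.0 * theta + rho)) * v[known]
    # Unobserved entries see only the loss term theta*x^2, so the minimizer is 0
    # (already set by np.zeros_like); values of v there (e.g. NaN) are ignored.
    return x

# Tiny usage example with two missing entries.
rng = np.random.default_rng(0)
v = rng.standard_normal(8)
known = np.ones(8, dtype=bool)
known[[2, 5]] = False
v[~known] = np.nan
print(masked_prox_sum_squares(v, known))
```

In the monograph's framework, an operator of this kind would be evaluated once per component class at each iteration of the distributed methods mentioned in the synopsis; the closed form above is specific to the quadratic loss, and other component classes generally require their own (possibly numerical) masked prox evaluations.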
