Using Non-Negative Matrix Factorization (NNMF)
I am trying to understand NNMF (Non-Negative Matrix Factorization). This is not a built-in function in Mathematica, but there is a package that implements it, which is referred to in this post. The package is loaded by:
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/NonNegativeMatrixFactorization.m"]
The problem that NNMF tries to solve is this: given a matrix $X$, find an approximate factorization $X \approx W.H$ where $W$ and $H$ both have all non-negative entries.
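For orientation, the idea can be sketched outside Mathematica as well. The following is an illustrative Python/NumPy implementation of the classic Lee-Seung multiplicative updates; this is a sketch of the general technique, not the algorithm the package uses, and all names in it are hypothetical:

```python
import numpy as np

def nmf_multiplicative(X, k, steps=500, eps=1e-9):
    """Approximate X ~ W @ H with non-negative W, H using
    Lee-Seung multiplicative updates (illustrative sketch only)."""
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, k)) + eps   # strictly positive initialization
    H = rng.random((k, n)) + eps
    for _ in range(steps):
        # each update multiplies by a non-negative ratio, so
        # W and H stay non-negative throughout
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# A rank-5 non-negative test matrix, analogous to the question's x:
rng = np.random.default_rng(1)
X = (rng.integers(0, 11, (50, 5)) @ rng.integers(0, 11, (5, 100))).astype(float)

W, H = nmf_multiplicative(X, 5)
rel_err = np.linalg.norm(W @ H - X) / np.linalg.norm(X)
```

The relevant point for what follows: quality is naturally judged by the relative error `rel_err`, not by the absolute norm of the residual.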
But when I try to apply this using the package, I cannot figure out what is happening. First, I construct a matrix $x$ at random, but of low rank (rank 5):
xKer = RandomInteger[{0, 10}, {5, 5}];
xL = RandomInteger[{0, 10}, {50, 5}];
xR = RandomInteger[{0, 10}, {5, 100}];
x = xL.xKer.xR;
Dimensions[x]
MatrixRank[x]
So you can see $x$ is 50 by 100, but is of rank only 5. Applying the NNMF command from the package:
{w, h} = GDCLS[x, 5, "MaxSteps" -> 1000];
Dimensions[w]
Dimensions[h]
So we can see that $w.h$ has the same dimensions as $x$. But
Norm[w.h - x]
is very large, so $w.h$ is not a good approximation to $x$.
Thus my question: why doesn't NNMF seem to work? Or am I expecting the wrong thing?
matrix linear-algebra
Maybe x simply cannot be factored this way? Moreover, it is more realistic to consider a relative error measure. E.g., Norm[w.h - x, "Frobenius"]/Norm[x, "Frobenius"] returns 0.00326206, which is not that bad... With MaxSteps -> 10000, one can get down to 0.00075928 or so.
– Henrik Schumacher
4 hours ago

If you create x = xL.xR, then it certainly can be expressed as w.h, and there is still significant error in the Norm. But maybe you are right: the error is small compared to the size of x.
– bill s
4 hours ago

@HenrikSchumacher beat me to it! (BTW, the automatic precision goal is 4.)
– Anton Antonov
4 hours ago
asked 5 hours ago by bill s; edited 1 hour ago
1 Answer
Thank you for using that package!
The stopping criterion is based on relative precision. Find the lines:
....
normV = Norm[V, "Frobenius"]; diffNorm = 10 normV;
If[ pgoal === Automatic, pgoal = 4 ];
While[nSteps < maxSteps && TrueQ[! NumberQ[pgoal] || NumberQ[pgoal] && (normV > 0) && diffNorm/normV > 10^(-pgoal)],
nSteps++;
...
in the implementation code. Note the condition diffNorm/normV > 10^(-pgoal).
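In other words, iteration stops once the change per step is small relative to the input, not once w.h is close to x in absolute terms. A hypothetical Python sketch of that kind of stopping rule (the function and variable names are mine, not the package's):

```python
import numpy as np

def iterate_until_precision(step, X, pgoal=4, max_steps=200):
    """Call step() (which returns the current approximation of X)
    until the change between successive approximations, relative to
    ||X||, drops below 10^-pgoal.  This mirrors the package's
    diffNorm/normV > 10^(-pgoal) loop condition."""
    norm_x = np.linalg.norm(X)   # Frobenius norm, like Norm[V, "Frobenius"]
    prev = None
    n_steps = 0
    while n_steps < max_steps:
        approx = step()
        if prev is not None:
            diff = np.linalg.norm(approx - prev)
            if norm_x > 0 and diff / norm_x <= 10.0 ** (-pgoal):
                break            # converged in the relative sense
        prev = approx.copy()
        n_steps += 1
    return n_steps

# With a step that no longer changes anything, the loop exits
# as soon as it can measure a difference (after one counted step):
X = np.ones((3, 3))
n = iterate_until_precision(lambda: X, X, pgoal=4)
```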
Here is an example based on the question's code:
SeedRandom[2343]
xKer = RandomInteger[{0, 10}, {5, 5}];
xL = RandomInteger[{0, 10}, {50, 5}];
xR = RandomInteger[{0, 10}, {5, 100}];
x = xL.xKer.xR;
Dimensions[x]
MatrixRank[x]
(* {50, 100} *)
(* 5 *)
Options[GDCLS]
(* {"MaxSteps" -> 200, "NonNegative" -> True,
"Epsilon" -> 1.*10^-9, "RegularizationParameter" -> 0.01,
PrecisionGoal -> Automatic, "PrintProfilingInfo" -> False} *)
AbsoluteTiming[
{w, h} = GDCLS[x, 5, PrecisionGoal -> 3, "MaxSteps" -> 100000];
{Dimensions[w], Dimensions[h]}
]
(* {19.759, {{50, 5}, {5, 100}}} *)
Norm[w.h - x]/Norm[x]
(* 0.000939317 *)
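The absolute norm in the question is misleading for a matrix with entries this large. The comments' point can be reproduced with a small illustrative Python/NumPy check; the numbers below come from synthetic noise standing in for the factorization residual, not from GDCLS:

```python
import numpy as np

rng = np.random.default_rng(0)
# A matrix like the question's x: non-negative, rank 5, large entries.
X = (rng.integers(0, 11, (50, 5)) @ rng.integers(0, 11, (5, 100))).astype(float)

# Add noise scaled to exactly 0.1% of ||X|| (Frobenius), as a stand-in
# for the residual w.h - x of a good approximation.
noise = rng.standard_normal(X.shape)
noise *= 0.001 * np.linalg.norm(X) / np.linalg.norm(noise)
approx = X + noise

abs_err = np.linalg.norm(approx - X)    # sizable on its own
rel_err = abs_err / np.linalg.norm(X)   # 0.001: a good fit
```

So an absolute residual norm of several units is entirely consistent with a relative error of about 10^-3.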
This algorithm seems to be a bit slow. 100000 iterations is quite a lot. I am pretty sure that one can make a semi-smooth Newton method with a much higher convergence rate work for the underlying optimization problem. If you like, I can elaborate on this. Are you interested?
– Henrik Schumacher
2 hours ago

Thanks for writing the package! And of course, thanks also for helping me understand how to use it. I am hoping to replace some SVD calculations with NNMF.
– bill s
1 hour ago

@bills Thanks, good to hear! You might also be interested in Independent Component Analysis (ICA), discussed (together with NNMF) in the MSE question "How to do Independent Component Analysis?", and the Community posts "Independent component analysis for multidimensional signals" and "Comparison of dimension reduction algorithms over mandala images generation".
– Anton Antonov
1 hour ago

@HenrikSchumacher 1) Yes, this NNMF implementation is slow, and NNMF should be fast (enough) since it is usually run several times: NNMF is prone to getting stuck in local minima. 2) "100000 iterations is quite a lot." -- I mostly use NNMF for NLP (topic extraction), and I rarely run NNMF for more than 12-20 steps. 3) Of course, all improvement suggestions are welcome. I would say it would be best if you write a package and post it on GitHub.
– Anton Antonov
47 mins ago
answered 4 hours ago by Anton Antonov; edited 4 hours ago
Thanks for contributing an answer to Mathematica Stack Exchange!