True accuracy of neural network
My goal is to calculate the probability of correctly classifying an object if I make $k$ predictions on slightly different images of it. The predicted class would then be the one that was predicted most often.
If I had only two classes, I think I could just use a binomial distribution and set $x = \tfrac{k}{2} + 1$, so that the correct class is predicted more than half of the time:
$$
P\left(X \geq \tfrac{k}{2} + 1\right) = \sum_{i = \frac{k}{2} + 1}^{k} \binom{k}{i} \, p^{i} \, (1-p)^{k-i}
$$
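As a sanity check, the two-class tail probability above can be computed directly. A minimal sketch (the function name `majority_vote_accuracy` is mine, not from the question):

```python
from math import comb

def majority_vote_accuracy(p: float, k: int) -> float:
    """Probability that the correct class wins a strict majority of k
    independent two-class predictions, each correct with probability p."""
    threshold = k // 2 + 1  # strictly more than half of the k votes
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(threshold, k + 1))

# With per-prediction accuracy p = 0.8 and k = 5 predictions,
# majority voting lifts the accuracy to about 0.9421.
print(round(majority_vote_accuracy(0.8, 5), 4))  # → 0.9421
```

Note that for odd $k$ a tie is impossible, which is why the strict-majority threshold is the natural choice there.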
The problem arises when there are more than two classes. How could I solve it in that case?
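For more than two classes there is no closed form without further assumptions about how errors are distributed. One option is a Monte Carlo estimate of the plurality-vote accuracy; the sketch below assumes (and this is an assumption, not something the model guarantees) that each wrong prediction lands uniformly on one of the other classes:

```python
import random
from collections import Counter

def plurality_accuracy_mc(p, n_classes, k, trials=50_000, seed=0):
    """Monte Carlo estimate of the chance that class 0 (the true class)
    wins a plurality vote over k predictions. Each prediction is correct
    with probability p; otherwise it is uniform over the other classes.
    Ties involving the true class count as failures (a conservative choice)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        votes = Counter()
        for _ in range(k):
            if rng.random() < p:
                votes[0] += 1
            else:
                votes[rng.randrange(1, n_classes)] += 1
        top = max(votes.values())
        # class 0 must strictly beat every other class
        if votes[0] == top and sum(v == top for v in votes.values()) == 1:
            wins += 1
    return wins / trials
```

With `n_classes=2` and odd `k`, plurality voting reduces to majority voting, so the estimate should agree with the binomial tail probability up to sampling noise.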
Also, following the answer given to this question, I am unsure whether I can use the empirical accuracy obtained by evaluating the model on a test data set, or whether I additionally need to account for the true accuracy.
binomial-distribution neural-networks
(1) Why not use a one-hot softmax classifier, run it $k$ times, and take the argmax of the average output over the $k$ runs? (2) The empirical test accuracy is usually the best estimate you can get of the true accuracy. How would you get the true accuracy otherwise? You would need infinite data. Theoretically, you can use something like PAC bounds to relate the two.
– user3658307, Mar 24 at 15:09
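The aggregation scheme suggested in the comment — average the $k$ softmax outputs componentwise, then take the argmax — can be sketched in a few lines (pure Python; `averaged_argmax` is an illustrative name of mine):

```python
def averaged_argmax(softmax_outputs):
    """Average k softmax probability vectors componentwise and return
    the index of the largest averaged entry. `softmax_outputs` is a
    list of k equal-length probability vectors."""
    k = len(softmax_outputs)
    n = len(softmax_outputs[0])
    avg = [sum(vec[j] for vec in softmax_outputs) / k for j in range(n)]
    return max(range(n), key=avg.__getitem__)

# The two runs disagree individually (argmax 0 vs. argmax 1);
# the averaged distribution [0.45, 0.4, 0.15] picks class 0.
print(averaged_argmax([[0.6, 0.3, 0.1],
                       [0.3, 0.5, 0.2]]))  # → 0
```

Unlike plurality voting over hard labels, this uses the full confidence of each run, so a single very confident prediction can outweigh several uncertain ones.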
asked Mar 17 at 17:31 by oezguensi
edited Mar 17 at 18:14 by Daniele Tampieri
Thanks for contributing an answer to Mathematics Stack Exchange!