True accuracy of neural network



My goal is to calculate the probability of correctly classifying an object when I make $k$ predictions on slightly different images of it. The predicted class is then the one that occurs most often among the $k$ predictions (a majority vote).



If I had only two classes, I think I could just use a binomial distribution: letting $X$ count the correct predictions out of $k$ (with per-prediction accuracy $p$, and $k$ assumed even), the majority vote is correct when $X \geq \tfrac{k}{2} + 1$, i.e. when the correct class is predicted more than half of the time.



$$
P\left(X \geq \tfrac{k}{2} + 1\right) = \sum_{i = \frac{k}{2} + 1}^{k} \binom{k}{i} \, p^{i} \, (1-p)^{k-i}
$$
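For concreteness, here is a minimal sketch of this two-class computation, assuming an even $k$ and an illustrative per-prediction accuracy $p$ (the scipy.stats.binom survival function gives the upper tail directly):

from scipy.stats import binom

k = 10    # number of predictions per object (assumed even)
p = 0.8   # per-prediction accuracy (illustrative value)

# X ~ Binomial(k, p) counts the correct predictions; the majority
# vote is right when X >= k/2 + 1. sf(x) returns P(X > x).
threshold = k // 2 + 1
p_majority = binom.sf(threshold - 1, k, p)

print(f"P(majority vote correct) = {p_majority:.4f}")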




  1. The problem arises when I have more than two classes. How could I solve it then? (One approach is sketched after this list.)


  2. Also, following the answer given to this question, I am unsure whether I can use the empirical accuracy obtained by evaluating the model on a test data set, or whether I additionally need to account for the true accuracy.
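On question 1: with $m$ classes the vote counts $(n_1, \dots, n_m)$ follow a multinomial distribution, and the correct class (say class 1, predicted with probability $p_1$) must win a strict plurality, so one way to write the probability is

$$
P(\text{correct class wins}) = \sum_{\substack{n_1 + \dots + n_m = k \\ n_1 > n_j \ \forall j > 1}} \binom{k}{n_1, \dots, n_m} \prod_{j=1}^{m} p_j^{n_j}.
$$

Since this sum is awkward to evaluate in closed form, a Monte Carlo estimate is a practical alternative. A minimal sketch, where probs is a hypothetical vector of per-class prediction probabilities for images of the true class (probs[0] being the probability of predicting correctly):

import numpy as np

rng = np.random.default_rng(0)

def plurality_accuracy(probs, k, n_sim=100_000):
    # Draw vote counts for n_sim simulated objects, each receiving
    # k predictions distributed according to probs.
    counts = rng.multinomial(k, probs, size=n_sim)
    # The vote succeeds when the true class (index 0) strictly
    # beats every other class; ties count as failures here.
    wins = counts[:, 0] > counts[:, 1:].max(axis=1)
    return wins.mean()

# Illustrative 4-class example: 70% correct, errors spread evenly.
print(plurality_accuracy([0.7, 0.1, 0.1, 0.1], k=11))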











binomial-distribution neural-networks






edited Mar 17 at 18:14 by Daniele Tampieri

asked Mar 17 at 17:31 by oezguensi








  • (1) Why not use a one-hot softmax classifier, run it $k$ times, and take the argmax of the average output over the $k$ runs? (2) The empirical test accuracy is usually the best estimate you can get of the true accuracy. How would you get the true accuracy? You would need infinite data. Theoretically you can use something like PAC bounds to see the relation between the two.
    – user3658307, Mar 24 at 15:09
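A minimal sketch of the averaging strategy the comment proposes, assuming a hypothetical model that maps one image to a vector of softmax class probabilities:

import numpy as np

def predict_by_averaging(model, images):
    # Stack the k softmax outputs (shape: k x n_classes), average
    # them over the k views, and take the argmax.
    probs = np.stack([model(x) for x in images])
    return probs.mean(axis=0).argmax()

On point (2), one standard way to relate the two accuracies is a Hoeffding bound: with a test set of $n$ i.i.d. examples, with probability at least $1 - \delta$,

$$
\left| \text{acc}_{\text{true}} - \text{acc}_{\text{test}} \right| \leq \sqrt{\frac{\ln(2/\delta)}{2n}},
$$

so the empirical test accuracy is a consistent estimate of the true accuracy whose error shrinks like $1/\sqrt{n}$.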