


Proving the learnability of XOR function by a particular neural network


Let's say I have the following neural network and constraints:

1. The architecture is fixed: a 2-2-1 feed-forward network, i.e. two inputs, two hidden units, and one output (the image of the network is omitted since I'm not allowed to post images due to low rep). Note that there are no biases.
2. The activation function for the hidden layer is $\mathrm{ReLU}$, where $\mathrm{ReLU}(x) = \max(0, x)$.
3. There is no activation function for the output layer; it simply returns the sum of the inputs it receives.
4. The weights are constrained to the set $\{-1, 0, 1\}$.
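
Concretely, the network these constraints describe fits in a few lines of Python. Here is a minimal sketch (the names `relu` and `forward` are my own; the weight indexing is assumed to match the general form $F$ written below):

```python
def relu(x):
    # Hidden-layer activation: ReLU(x) = max(0, x)
    return max(0, x)

def forward(w, x1, x2):
    # Sketch of the constrained 2-2-1 network described above.
    # w = (w1, w2, w3, w4, w5, w6), each weight in {-1, 0, 1}; no biases.
    w1, w2, w3, w4, w5, w6 = w
    h1 = relu(x1 * w1 + x2 * w3)  # hidden unit 1
    h2 = relu(x1 * w4 + x2 * w2)  # hidden unit 2
    return h1 * w5 + h2 * w6      # output: weighted sum, no activation
```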


My question is:



Can we show whether or not the XOR function is learnable, given the network architecture and the associated constraints?



Here's how I thought about it:



Given the XOR truth table, we can write down an equation for the network output on each input instance. If the inputs are $X_1$ and $X_2$, the output of the network, $F(X_1, X_2)$, can be written in its general form as:



$$\mathrm{ReLU}(X_1 w_1 + X_2 w_3)\,w_5 + \mathrm{ReLU}(X_1 w_4 + X_2 w_2)\,w_6 = F(X_1, X_2)$$



Using the truth table combinations, we obtain:



$0,1 \rightarrow 1$:
$$\max(0,\ 0 + 1 \cdot w_3)\,w_5 + \max(0,\ 0 + 1 \cdot w_2)\,w_6 = F(0, 1) = 1$$
$$\max(0, w_3)\,w_5 + \max(0, w_2)\,w_6 = 1 \tag{1}$$



$1,0 \rightarrow 1$:
$$\max(0,\ 1 \cdot w_1 + 0)\,w_5 + \max(0,\ 1 \cdot w_4 + 0)\,w_6 = F(1, 0) = 1$$
$$\max(0, w_1)\,w_5 + \max(0, w_4)\,w_6 = 1 \tag{2}$$



$1,1 \rightarrow 0$:
$$\max(0,\ 1 \cdot w_1 + 1 \cdot w_3)\,w_5 + \max(0,\ 1 \cdot w_4 + 1 \cdot w_2)\,w_6 = F(1, 1) = 0$$
$$\max(0, w_1 + w_3)\,w_5 + \max(0, w_4 + w_2)\,w_6 = 0 \tag{3}$$

(The remaining case, $0,0 \rightarrow 0$, holds automatically: with no biases both hidden units receive input $0$, so $F(0, 0) = 0$ for every choice of weights.)



Can we show that the above system of equations does or does not have a solution for the $w_i$ values?



Here is a similar problem on Cross Validated.
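
Since each weight ranges over the finite set $\{-1, 0, 1\}$, there are only $3^6 = 729$ candidate assignments, so one way to settle the question is exhaustive search. A minimal sketch, reusing the `forward` function from the earlier snippet:

```python
from itertools import product

# XOR truth table: (X1, X2) -> target output
xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Keep every weight assignment that reproduces the whole table.
solutions = [
    w for w in product((-1, 0, 1), repeat=6)
    if all(forward(w, x1, x2) == y for (x1, x2), y in xor_table.items())
]
print(len(solutions), "of 729 assignments realize XOR")
```

For example, hand-checking the assignment $w_1 = 1,\ w_2 = 1,\ w_3 = -1,\ w_4 = -1,\ w_5 = w_6 = 1$ gives $F(X_1, X_2) = \mathrm{ReLU}(X_1 - X_2) + \mathrm{ReLU}(X_2 - X_1) = |X_1 - X_2|$, which is exactly XOR on $\{0, 1\}^2$, so equations (1)-(3) are simultaneously satisfiable.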










Tags: proof-writing, systems-of-equations, constraints, neural-networks, constraint-programming






asked yesterday by j.Doe



