
Can I have $Q = R = I$ as covariance matrices for a Kalman filter?





Assume that we have no noise in our system. We are using a low-pass filter to filter out some peaks in the measurements.



But our goal is just to estimate the state $X_k$. Can we set $Q_k$ and $R$ to the identity matrix $I$, or do we need to compute $Q_k$ and $R$?



We can assume that we have no noise from our process.



We know $A, B, C, X_0, P_0, U_k, Y_k, H$, but not $Z_k$ and $W_k$.













Tags: control-theory, optimal-control, linear-control, kalman-filter

asked Mar 25 at 10:10 by Daniel Mårtensson (edited Mar 25 at 10:37)

  • Why do you have to filter out peaks if there is no noise acting on the system? – Kwin van der Veen, Mar 25 at 13:10










  • @KwinvanderVeen It can be a disturbance from the microcontroller. I know that $R$ can be found as $R = \operatorname{cov}(Y_k)$ when $Y_k$ is at steady state (see the sketch after these comments). But what about $Q$? – Daniel Mårtensson, Mar 25 at 14:12










  • $Q$ is the prior variance (sometimes called the initial variance) of the state. If you know everything else, it can be estimated using prediction error decomposition, but that is a time-series methodology (see Andrew Harvey's blue book). I'm not familiar with how it would be done in the control field, which could be something totally different. – mark leeds, Mar 26 at 5:30












  • @markleeds It's very difficult to find $Q$ in reality for an unknown process. Can I then set $Q = 0$? – Daniel Mårtensson, Mar 26 at 13:24










  • Hi: I would think that, even in the control field, you should either give it a prior (in a Bayesian framework) or estimate it (in a classical framework). Setting it to zero says that the state has no prior variance, so no, I don't think that's the approach to take. Hopefully someone else can chime in. But do check out Harvey's blue book, because if everything else is known, it's not that hard to estimate. – mark leeds, Mar 27 at 16:59
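One of the comments above suggests estimating $R$ as the covariance of the measurements once the output has reached steady state. Below is a minimal sketch of that idea in Python/NumPy; the logged data `Y_ss` is a synthetic placeholder standing in for your own sensor log, so the names and numbers are illustrative only.

```python
import numpy as np

# Placeholder for a window of logged measurements at steady state,
# shape (n_samples, n_outputs). Replace with your own sensor data.
Y_ss = np.random.default_rng(0).normal(loc=1.0, scale=0.05, size=(500, 2))

# Sample covariance of Y_k around its steady-state mean, used as an
# estimate of the measurement noise covariance R (as in the comment above).
R_hat = np.cov(Y_ss, rowvar=False)

print(R_hat)  # close to diag(0.0025, 0.0025) for this synthetic data
```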
















1 Answer

It sounds like you have a common misconception when it comes to the Kalman filter.



If a) your process is exactly linear, b) you know the coefficients exactly, c) you know the initial state exactly, and d) you know the inputs exactly, then you have no need of a Kalman filter, because you can use the equation $x_{k+1} = A_kx_k + B_ku_k$ to compute the state for all $k$. If you know everything except the initial condition exactly, then you can just use a deterministic observer.
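As a trivial illustration of that point (my sketch, not part of the original answer): if $A$, $B$, $x_0$ and the inputs are known exactly, the state can simply be rolled forward open loop, with no estimator at all. The matrices below are arbitrary placeholders.

```python
import numpy as np

def propagate_open_loop(x0, A, B, inputs):
    """If A, B, x0 and the inputs are known exactly, the state needs no estimator."""
    x = np.asarray(x0, dtype=float)
    trajectory = [x]
    for u in inputs:
        x = A @ x + B @ u                # x_{k+1} = A x_k + B u_k
        trajectory.append(x)
    return np.array(trajectory)

# Arbitrary 2-state example with a constant input (illustrative only).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
states = propagate_open_loop([0.0, 0.0], A, B, inputs=[np.array([1.0])] * 50)
```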



On the other hand, if, as is the case 99% of the time, you don't know one or more of these things exactly, then process noise is your friend.



Why?



Because process noise compensates for model/initial condition/input uncertainty and nonlinearity by telling the Kalman filter to downweight a priori estimates. This means it will rely less on the model and more on the data. Even if the data is noisy, you want to incorporate it because the model is unreliable. The beauty of the Kalman filter is that it gives you the ability to use the best of noisy data and the best of an unreliable model, so you can make a better state estimate than was possible with just one or the other.
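To make the downweighting mechanism concrete, here is a generic, textbook-style predict/update step (a sketch in NumPy, not code from this answer). In the predict step $Q$ is added to the propagated covariance, so a larger $Q$ inflates $P$ and pushes the gain $K$ toward trusting the measurement; a larger $R$ does the opposite.

```python
import numpy as np

def kalman_step(x, P, u, y, A, B, C, Q, R):
    """One predict/update cycle of a linear Kalman filter (illustrative sketch)."""
    # Predict: propagate the state and covariance; Q inflates P (model uncertainty).
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q

    # Update: the gain K balances the predicted covariance against the measurement noise R.
    S = C @ P_pred @ C.T + R             # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Tiny usage example with arbitrary 2-state / 1-output matrices (illustrative only).
x, P = kalman_step(np.zeros(2), np.eye(2), np.zeros(1), np.array([0.3]),
                   A=np.eye(2), B=np.zeros((2, 1)), C=np.array([[1.0, 0.0]]),
                   Q=np.eye(2), R=np.eye(1))
```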



Filter tuning is essentially a field of black magic, and I've known several engineers who have made their careers doing this alone. Usually, using the sensor variances and covariances is good enough for the measurement noise covariance. For the process noise covariance, I usually start out with a constant times the identity. I vary the diagonal entries until I have the right weighting between the states, and I vary the constant in front until I have the right mix of information between the model and the data going into the estimate.
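One way to turn that hand-tuning into a loop (my sketch, under the assumption that you have a measurement log and a model in hand; the matrices and data below are placeholders) is to scan the scalar in $Q = c\,I$ and check how consistent the innovations are, e.g. via the normalized innovation squared (NIS):

```python
import numpy as np

# Hypothetical system and data; replace with your own model and sensor log.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
R = np.array([[0.04]])                                        # e.g. from sensor variance
y_log = 0.2 * np.random.default_rng(1).normal(size=(200, 1))  # placeholder data

def average_nis(c):
    """Run the filter with Q = c*I and return the mean normalized innovation squared."""
    Q = c * np.eye(2)
    x, P = np.zeros(2), np.eye(2)
    nis = []
    for y in y_log:
        x, P = A @ x, A @ P @ A.T + Q                    # predict
        S = C @ P @ C.T + R
        e = y - C @ x                                    # innovation
        nis.append(float(e @ np.linalg.inv(S) @ e))
        K = P @ C.T @ np.linalg.inv(S)
        x, P = x + K @ e, (np.eye(2) - K @ C) @ P        # update
    return float(np.mean(nis))

# For a consistent tuning the average NIS should sit near the measurement dimension (1 here).
for c in [1e-4, 1e-3, 1e-2, 1e-1]:
    print(c, average_nis(c))
```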



Finally, there is work on systematic filter tuning for certain applications, but it's not much help unless you're working on one of those applications.



EDIT 4/17/19:



I now realize I didn't answer the question as the questioner asked but rather what the questioner probably meant. To assist people who are actually wondering if you can put $R = Q = I$ for the process and measurement covariances, the answer is yes, since the identity is positive definite. However, this does NOT correspond to having no process or measurement noise, but rather to having process and measurement noise components follow a standard normal distribution. Since different components are scaled differently, this will usually not be a good tuning.
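To illustrate that last point with numbers (my example, not the answerer's): $Q = R = I$ is an admissible input to the filter equations, but the gain it produces implicitly assumes unit-variance noise. Rescaling $R$ to a realistic sensor variance changes the gain dramatically, which is why the raw identity is rarely a good tuning. Below is a hypothetical scalar system.

```python
def steady_state_gain(q, r, a=1.0, c=1.0, iters=200):
    """Iterate the scalar Riccati recursion and return the converged Kalman gain."""
    p = 1.0
    for _ in range(iters):
        p_pred = a * p * a + q                 # predict covariance with process noise q
        k = p_pred * c / (c * p_pred * c + r)  # gain
        p = (1.0 - k * c) * p_pred             # update covariance
    return k

# "Identity" tuning (q = r = 1) versus the same q with a realistic sensor variance.
print(steady_state_gain(q=1.0, r=1.0))   # ~0.62: trusts model and data about equally
print(steady_state_gain(q=1.0, r=1e-4))  # ~1.0 : trusts the (accurate) sensor almost completely
```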






answered 2 days ago by SZN, edited yesterday