It's hard to answer this in absolute terms, and a statistician would probably be better equipped to answer than I am. However, you don't need the level of certainty required to present your MVP's analytics to an academic review board; you only need enough confidence to keep investing in the product, or enough confidence that it's failing and you need to pivot. Your intuition can go a long way, and at this stage qualitative evidence may be more valuable than quantitative.
To give this a helpful frame: your MVP should be designed to test both your product's value hypothesis and its growth hypothesis, and each calls for different measurement considerations.
Testing the value hypothesis requires relatively few users, especially when you combine quantitative and qualitative data. Even with as few as 10 users you can start to see stable patterns in how positively or negatively people are reacting to the product. If 8 out of 10 (not artificially selected) users are engaging with the product as you hoped, you can be reasonably confident that the value is being realized and that you should move on to optimizing the value proposition rather than pivoting to a new product concept.
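As a rough sanity check on that 8-out-of-10 intuition, a Wilson score interval shows how much uncertainty a sample of 10 still carries. This is a back-of-envelope sketch in Python; the numbers are illustrative, and `wilson_interval` is just a helper written here, not a library function:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# 8 of 10 users engaged: the interval runs from roughly 49% to 94%.
low, high = wilson_interval(8, 10)
print(f"95% interval: {low:.2f} to {high:.2f}")
```

Even with that wide interval, the lower bound near 50% is a meaningful signal for a go/no-go decision, which is the point: MVP-stage confidence, not peer-review-stage confidence.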
The growth hypothesis likely takes more users to validate, since you'll have to lean more heavily on statistics to know whether positive experiences are driving users to perform the necessary growth-inducing actions. It also depends on how you hope to achieve growth (social media sharing? SEO?) and how large the growth coefficient needs to be for the product to succeed.
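To make "growth coefficient" concrete: for sharing-driven growth it's usually the viral coefficient k, the average number of new users each existing user brings in. A hypothetical back-of-envelope calculation (the invite and conversion numbers are invented for illustration):

```python
def viral_coefficient(invites_per_user, conversion_rate):
    """k = average invites sent per user x fraction of invites that convert."""
    return invites_per_user * conversion_rate

def users_after_generations(seed_users, k, generations):
    """Total users after n generations of invites, given coefficient k."""
    total = seed_users
    new = seed_users
    for _ in range(generations):
        new = new * k
        total += new
    return total

# Hypothetical: each user shares with 5 friends and 15% of them sign up.
k = viral_coefficient(5, 0.15)
print(f"k = {k:.2f}")

# With k < 1, sharing alone plateaus rather than compounding:
print(round(users_after_generations(1000, k, 10)))
```

With k below 1 each generation of invites is smaller than the last, so sharing amplifies other channels but can't sustain growth by itself; only k > 1 compounds. That's why the size of the coefficient you need, and how far your MVP falls short of it, determines how many users it takes to measure it with any confidence.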