All experiments are programmed to end when they reach statistical significance. This means the results of your test are unlikely to be due to pure chance and should hold up with a bigger audience. There’s no fixed amount of traffic needed to reach statistical significance; it depends mainly on the performance of the test and the confidence level you want to have in the results. If the Conversion Rate difference between variations is huge, the test won’t need much traffic to complete (we’ve seen tests complete with 500 Unique Visitors). If the difference in Conversion Rate between variations is very small, the amount of traffic the tool needs to clearly identify a winner will be substantially bigger.
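The relationship between effect size and required traffic can be sketched with the textbook two-proportion sample-size formula (a simplified illustration, not the exact calculation the tool runs; the 5% → 10% and 5% → 5.5% conversion rates below are made-up examples):

```python
from statistics import NormalDist

def required_visitors_per_variation(rate_a, rate_b, confidence=0.90, power=0.80):
    """Approximate visitors needed per variation to detect the difference
    between two conversion rates (classic two-proportion z-test formula)."""
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    pooled = (rate_a + rate_b) / 2
    top = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
           + z_beta * (rate_a * (1 - rate_a) + rate_b * (1 - rate_b)) ** 0.5) ** 2
    return top / (rate_b - rate_a) ** 2

# A huge lift (5% -> 10%) needs only a few hundred visitors per variation...
print(round(required_visitors_per_variation(0.05, 0.10)))
# ...while a tiny lift (5% -> 5.5%) needs tens of thousands.
print(round(required_visitors_per_variation(0.05, 0.055)))
```

Halving the detectable difference roughly quadruples the traffic required, which is why small differences take so much longer to resolve.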
As for the confidence level, you can also think of it as the margin of error you are willing to accept. Tests are automatically set at a 90% Confidence Level (10% margin of error). If you want to accept as little error as possible (a Confidence Level of 99%), the traffic needed will increase considerably. The opposite happens if you allow more room for error. We let you go as low as an 80% Confidence Level (20% margin of error), because we understand that going any lower would negatively impact your results.
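How much extra traffic a stricter confidence level demands can also be sketched with the standard two-proportion formula (again an illustration with made-up conversion rates of 4% and 5%, not the tool’s exact internals):

```python
from statistics import NormalDist

def required_visitors(rate_a, rate_b, confidence, power=0.80):
    # Classic two-proportion z-test sample size, per variation.
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    z_beta = NormalDist().inv_cdf(power)
    pooled = (rate_a + rate_b) / 2
    top = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
           + z_beta * (rate_a * (1 - rate_a) + rate_b * (1 - rate_b)) ** 0.5) ** 2
    return top / (rate_b - rate_a) ** 2

# Same test (4% vs 5% conversion), three confidence levels:
for confidence in (0.80, 0.90, 0.99):
    print(f"{confidence:.0%} confidence -> "
          f"{round(required_visitors(0.04, 0.05, confidence))} visitors per variation")
```

In this example, moving from 90% to 99% confidence roughly doubles the traffic needed, which is why lower-traffic sites often prefer a lower confidence level.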
To change the Confidence Level of your experiments, go to Settings > Auto-settings.
You'll be able to set different confidence levels for your different sites. We recommend a higher confidence level on sites that have good traffic, and a lower one on sites with less traffic. Try to find the sweet spot for each of your websites.