You try them and see how well they work. Anything other than experimentation is guesswork.
You can make very broad assertions like, “convolutional neural networks work well for image recognition,” but that doesn’t really help you determine which specific algorithm to use. And even in that situation, there’s no way to know ahead of time how well a given CNN implementation will perform in your particular scenario.
However, in practice this isn’t really an issue. Why? Because you don’t need the best machine learning algorithm; you need one that’s good enough. So your first goal is to define what “good enough” means: what kind of accuracy do you need for this to be effective? After you’ve done that, do a literature review to find which classes of algorithms are currently state of the art (assuming there is literature to reference, which is usually a safe bet).
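As a minimal sketch of that workflow, here’s what “pick a target, then measure a literature-recommended baseline against it” might look like with scikit-learn. The 0.90 target, the bundled dataset, and the random-forest baseline are placeholders for illustration, not recommendations:

```python
# Sketch: define "good enough" first, then evaluate a baseline against it.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

TARGET_ACCURACY = 0.90  # set by your product/problem requirements, not by the model

X, y = load_breast_cancer(return_X_y=True)   # stand-in for your real dataset
baseline = RandomForestClassifier(random_state=0)  # stand-in for the literature's pick

# Cross-validated accuracy of the baseline approach.
accuracy = cross_val_score(baseline, X, y, cv=5, scoring="accuracy").mean()
print(f"baseline accuracy: {accuracy:.3f} (target: {TARGET_ACCURACY})")
```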
Once you’ve tried the approach advocated by the existing literature, there are three possible outcomes (sketched in code after the list):
- You’re nowhere close to the accuracy you need. In this case your problem is likely intractable for now, or you looked at the wrong reference literature. Sometimes this means you’ve done something wrong in your problem framing and need to approach the problem differently.
- You’re close to the accuracy you need, but you’re not there yet. In this case you should actually experiment with other algorithms, architecture variants, etc. to pick up the additional accuracy points you need.
- You’re above the accuracy you need. In this case, just stop: you’re already good enough, and additional time produces diminishing returns.
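A rough sketch of that decision in Python. The five-point “close” margin is an arbitrary assumption here, not a rule; tune it to your situation:

```python
# Map measured accuracy vs. target onto the three outcomes above.
def next_step(accuracy: float, target: float, margin: float = 0.05) -> str:
    if accuracy >= target:
        return "stop: already good enough, further tuning has diminishing returns"
    if accuracy >= target - margin:
        return "iterate: try other algorithms / architecture variants"
    return "rethink: problem may be intractable or framed incorrectly"

print(next_step(accuracy=0.87, target=0.90))  # -> "iterate: ..."
```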
In very rare cases, you’re working in a scenario where every additional accuracy point adds significant value. In that situation, the traditional approach is to continuously try new techniques, keep up with new literature, and spend years tweaking and tuning the algorithm to squeeze out every additional point of accuracy you can.