MAKHFI.COM
Fascinating World of Neural Nets  
Neural Network Forums

Author Topic: A Simple Doubt
jaime_ap Posted: 28-Feb-11 01:55
Heya all, first post here. I looked around and I'm pretty sure this isn't covered already. Anyway, it's a pretty simple concept I'm hoping someone can verify for me.

I am working with an offline NN for pattern recognition that uses a large set of "examples" as training.

Of these examples, over 95% come from real-life, highly randomized events, while about 5% come from much less random events generated by us (the research team).

I am currently using some 2,000 examples for the training process, but I have access to a much, MUCH larger sample database, again most of which comes from real-life random events.

My question is: if I were to train the NN on the much larger sample database, do I run the risk of overtraining it, despite the examples being as unbiased as is practically possible? Or is the extra data more likely to help the NN produce better results?
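One way to make the overtraining concern concrete, regardless of database size, is to hold out a validation set and stop training when validation error stops improving. The sketch below is a hypothetical, minimal illustration (not the poster's actual setup): a single-layer logistic model trained by gradient descent on synthetic data, standing in for the real pattern-recognition net and example database.

```python
# Hedged sketch: early stopping against a held-out validation set.
# All names, sizes, and the synthetic data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))            # stand-in for the ~2,000 examples
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # synthetic binary target

# Hold out 20% of the examples purely for validation.
split = int(0.8 * len(X))
X_tr, y_tr = X[:split], y[:split]
X_va, y_va = X[split:], y[split:]

w = np.zeros(X.shape[1])
best_err, patience, lr = np.inf, 0, 0.1

for epoch in range(500):
    p = 1.0 / (1.0 + np.exp(-X_tr @ w))        # forward pass (sigmoid)
    w -= lr * X_tr.T @ (p - y_tr) / len(X_tr)  # gradient step on log loss

    p_va = 1.0 / (1.0 + np.exp(-X_va @ w))
    err = np.mean((p_va > 0.5) != y_va)        # validation error rate
    if err < best_err:
        best_err, patience = err, 0
    else:
        patience += 1
    if patience >= 20:                         # stop once validation stalls
        break

print(f"validation error: {best_err:.3f}")
```

If validation error keeps falling as more examples are added, the larger database is helping rather than overtraining the net; overtraining would show up as training error dropping while validation error rises.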
 

 

Copyright © 2001-2003 Pejman Makhfi. All rights Reserved.