I am working on a project to identify eye blinks and jaw clenches in EEG data using k-means clustering. This is my first AI project, and I am stuck on the data-processing step.
I have 3 datasets, each containing 10 minutes of EEG data sampled at 250 data points per second (i.e., 250 Hz). Here is one dataset -> https://drive.google.com/file/d/1s_YEvoQRgD6CuVrPjpwFs7k01r_-LW4Y/view?usp=sharing
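For reference, this is roughly how I am loading and reshaping one recording into 1-second windows (a minimal sketch; I'm assuming a single-channel CSV layout, and the filename `eeg_session1.csv` is just a stand-in for my actual file):

```python
import numpy as np
import pandas as pd

FS = 250  # sampling rate: 250 samples per second

# Assumed single-channel CSV; flatten to a 1-D array of samples.
raw = pd.read_csv("eeg_session1.csv").to_numpy().ravel()

# Trim to a whole number of seconds, then reshape into
# non-overlapping 1-second windows of 250 samples each.
n_windows = len(raw) // FS
windows = raw[: n_windows * FS].reshape(n_windows, FS)
print(windows.shape)  # (600, 250) for a 10-minute recording
```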
Does this mean that each second of data has 250 features? How does that affect training my algorithm, and is this much data okay to train with? And how can I use this dataset to form three clusters?
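This is how I was planning to feed the windows into k-means (again just a sketch, building on `windows` from the snippet above; treating the 250 raw samples per window as the feature vector is my current assumption, not something I'm sure is right):

```python
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Standardize each of the 250 per-window features, then
# cluster the 600 windows into three groups.
X = StandardScaler().fit_transform(windows)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels[:20])  # cluster assignment for the first 20 seconds
```

Is this the right way to frame the problem, or should I be extracting features from each window instead of clustering the raw samples?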