EEGLAB event file
ICA has a known bias toward high-amplitude signals. If the data length were infinite, it would not have this bias--my former colleague told me so.
This simply means that ICA has fewer things to learn from high frequencies. By the way, ASR applies an inverse EEG-PSD filter so that signals in those high-frequency ranges, which are unlikely to be dominated by natural EEG, are enhanced and more readily detected for correction. If that is the case, why not cut the high frequencies from the beginning, at least for ICA purposes--that is the rationale for this process. If you doubt it, you can verify it any time by comparing ICA results obtained from the original-rate data with those obtained from 100-Hz downsampled data.
It is even possible that, due to the band-limiting effect (100-Hz downsampled data is effectively 50-Hz low-pass filtered because of anti-aliasing), the ICA result could be even better. Does it negatively impact ICA's performance, since there is much less data available? This is something a simulation study can answer empirically, and the empirical answer is no.
I have been applying this downsampling-to-100-Hz approach as ICA preprocessing over several thousands of datasets, in some of which the data length became shorter than the above recommendation, but I did not see any impact.
So, the length in time in the real world counts, rather than the number of frames. For example, 5-min data is 30,000 frames when sampled at 100 Hz and 75,000 frames when sampled at 250 Hz. Does the latter help ICA decomposition quality? It does not--it depends on how much information scalp-recorded EEG contains above 50 Hz compared with below 50 Hz.
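To make this concrete, here is a minimal sketch of the preprocessing step, assuming standard EEGLAB functions (pop_resample, pop_runica) and the 100-Hz target discussed above:

    % Minimal sketch: downsample before ICA. pop_resample applies an
    % anti-aliasing low-pass at the new Nyquist (~50 Hz here) internally.
    EEG = pop_resample(EEG, 100);
    EEG = pop_runica(EEG, 'icatype', 'runica', 'extended', 1);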
This view is supported by the following description taken from Nunez and Srinivasan: "That is, we expect minimal changes in the mesosource function P(r,t) over mesoscopic 'relaxation times' in the range of 10 ms or perhaps several tens of milliseconds."

During the 30th EEGLAB workshop, one of the workshop attendees, Tjerk Gutteling, told us that commenting out the lines for 'drawnow' in runica speeds up ICA by several times (yes, several times, not several percent).
I immediately tested it and confirmed that it was true. Kudos to Tjerk! What a hacker. Even though I don't know exactly how the conflict happens, to avoid this issue one can most likely just comment out the 'drawnow;' lines as suggested above.
Here is my warning--you may struggle to install this solution (CUDAICA). How much speed does it gain compared with runica? I tested it using no extended infomax and a manually specified initial learning rate. Note that using double precision is for numerical stability, though GPU boards usually work much more efficiently on single-precision data than on double-precision data. Raimondo et al. reported a dramatic speed gain; my result was not that dramatic (probably because Tjerk Gutteling's 'drawnow' trick had already provided a several-fold speed-up for runica), but it is still significant.
In the case of John's machine (a Ryzen 7 CPU with a GTX Ti-class GPU), it was a x15 boost even after addressing the 'drawnow' slowing issue, which makes a huge difference. The most recent information in our community about how to install it on a Windows machine was reported by Ugo Bruzadin Nunes and is archived here; for Linux, see this article by Alejandro Ojeda, a former SCCN engineer.
Let me share the steps I took based on Ugo's suggestions and provide some updates. Here is a general tip for a further speed increase: using 'extended', 0 increases the processing speed significantly. You might be concerned that ICA's performance would be worsened. But if you think about what kind of signal sources have sub-Gaussian distributions, which only 'extended', 1 can specifically capture, you probably won't miss them at all.
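For reference, the call might look like the sketch below. It assumes the CUDAICA plugin is installed and registered with EEGLAB so that pop_runica accepts it as an 'icatype' value; check your plugin's documentation for the exact name.

    % Sketch: GPU Infomax with extended mode off for speed.
    % 'cudaica' as an icatype assumes the CUDAICA plugin is installed.
    EEG = pop_runica(EEG, 'icatype', 'cudaica', 'extended', 0);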
This is an example of writing out ERP (i.e., trial-averaged) data. This solution is for Steve Wu, and could also have been for Kelly Michaelis, although I took a different approach at the time; it seems apparent that this solution is more straightforward than hacking the STUDY structure. The whole point is that you can calculate ERSP with a flexibly specified baseline period even if there are duration-varying events between the two conditions.
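A minimal sketch of the idea, using standard EEGLAB's newtimef; the channel index, frequency range, and baseline window here are placeholders of my own:

    % Sketch: ERSP with an explicitly specified baseline window (in ms).
    figure;
    [ersp, itc, powbase, times, freqs] = newtimef( ...
        EEG.data(1, :, :), EEG.pnts, [EEG.xmin EEG.xmax]*1000, EEG.srate, 0, ...
        'freqs', [3 50], 'baseline', [-500 -200]);  % baseline chosen freely per condition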
The calculation of the pseudo-wavelet transform takes place in the function timefreq(), inside double for-loops: the first one over time points ('index') and the second one over frequency bins ('freqind').
If you have multichannel data, the number of iterations easily reaches into the millions. Even if each iteration takes only 1 ms, the total runs into hours. In theory, if we can use indices instead of for-loops, we can speed up the process.
For proof of concept, I modified the existing timefreq() to replace the existing for-loop blocks with a single vectorized one. Generally, the difference becomes larger when more iterations are necessary in the original code; noticeably so when I processed long continuous data with 50 frequency bins at a fine time resolution.
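Since the actual patch is distributed separately, here is only a toy sketch of the indexing idea, with placeholder sizes and variable names of my own (not the real timefreq.m internals); it needs implicit expansion (MATLAB R2016b or later):

    % Toy sketch: replace a loop over time points with one index matrix.
    data          = randn(1, 10500);                       % placeholder continuous signal
    winLen        = 256;                                   % samples per analysis window
    waveletKernel = randn(20, winLen);                     % placeholder: 20 frequency kernels
    centers       = 500:100:10000;                         % window center time points
    idx           = (centers - winLen/2) + (0:winLen-1)';  % winLen x nWindows indices
    windows       = data(idx);                             % extract all windows at once
    coeffs        = waveletKernel * windows;               % one product replaces the loops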
I would like to share this modified code for beta testing. Please download it from the link below and overwrite the existing timefreq.m.
Do not forget to take a backup of timefreq.m first. My current concern is that this implementation is RAM-intensive, particularly for continuous data transformation. We can still push further to save RAM by optimizing the code (changing variables to single precision, overwriting variables instead of creating new ones with different names, etc.).

You may want to perform a regression analysis between ERP amplitudes and trial-by-trial reaction times, etc. If a vector of variables y is linearly related to a vector x, the relation can be described as y = b0 + b1*x + e, and the betas can be estimated by least squares.
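As a hedged illustration of that model (the channel index and latency window are placeholders, and a trials-by-1 reaction-time vector rt is assumed to exist already):

    % Sketch: least-squares regression of single-trial ERP amplitude on RT.
    chanIdx = 10;                    % placeholder channel
    winIdx  = 150:200;               % placeholder latency window (samples)
    erpAmp  = squeeze(mean(EEG.data(chanIdx, winIdx, :), 2));  % trials x 1
    X       = [ones(numel(rt), 1), rt(:)];   % design matrix with intercept
    beta    = X \ erpAmp(:);                 % beta(2) is the ERP-RT slope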
This solution consists of two steps: (1) obtain the xyz coordinates of the estimated equivalent dipoles from all the ICs across datasets, and (2) cluster them. Note that I am using only dipole location as the clustering criterion (if you wonder why, see this page). Accordingly, my code is also separated into two sections.
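The sketch below follows those two steps. pop_loadset and EEG.dipfit.model are standard EEGLAB/dipfit; the file pattern, the cluster count, and the use of kmeans (Statistics Toolbox) are my assumptions:

    % Part 1: collect dipole xyz coordinates across datasets.
    setFiles = dir('*.set');                       % placeholder file pattern
    allXyz   = [];
    for n = 1:numel(setFiles)
        EEG = pop_loadset('filename', setFiles(n).name);
        for icIdx = 1:numel(EEG.dipfit.model)
            posxyz = EEG.dipfit.model(icIdx).posxyz;   % may have 2 rows for bilateral fits
            allXyz = [allXyz; posxyz(1, :)]; %#ok<AGROW>
        end
    end
    % Part 2: cluster on dipole location only.
    numClust = 10;                                 % placeholder cluster count
    clustIdx = kmeans(allXyz, numClust);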
On a separate note, I found that just copying all the files of a STUDY from one folder to another does not let the STUDY find the precomputed results; the paths stored in the STUDY and datasets still point to the old location. If those path fields are all that need to be updated, we can do it using code like the following.
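A minimal sketch, assuming the relevant fields are STUDY.filepath, STUDY.datasetinfo.filepath, and ALLEEG.filepath (verify against your EEGLAB version):

    % Sketch: rewrite stored paths after moving a STUDY folder.
    oldPath = '/data/old/studyFolder';   % placeholder
    newPath = '/data/new/studyFolder';   % placeholder
    for n = 1:length(STUDY.datasetinfo)
        STUDY.datasetinfo(n).filepath = strrep(STUDY.datasetinfo(n).filepath, oldPath, newPath);
        ALLEEG(n).filepath            = strrep(ALLEEG(n).filepath,            oldPath, newPath);
    end
    STUDY.filepath = newPath;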
At least it worked for me. Typically, you need to enter subject names; sometimes group names as well; rarely, in my case, condition names. Entering this info one by one on the GUI is horrible, particularly if you have many datasets. Here is code to do it automatically.
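A hedged sketch, assuming subject, group, and condition can be parsed from each dataset's filename (the 'sub01_groupA_cond1.set' naming pattern is my placeholder; adapt the parsing to yours):

    % Sketch: fill in STUDY metadata from filenames instead of the GUI.
    for n = 1:length(ALLEEG)
        [~, setName] = fileparts(ALLEEG(n).filename);
        tokens = strsplit(setName, '_');       % e.g., {'sub01','groupA','cond1'}
        ALLEEG(n).subject   = tokens{1};
        ALLEEG(n).group     = tokens{2};
        ALLEEG(n).condition = tokens{3};
    end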
Currently, you must add these new values manually. The Type column distinguishes among the different types of events. The Duration column shows the length of the event.

(Figures: plotting EEG data; changing the window length.)

By default, baseline removal will be applied to all channels. However, you can also choose specific channels by type (a type can be specified while editing channel information), or select them manually.
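The equivalent scripted call might look like this; pop_rmbase is standard EEGLAB, and the [-200 0] ms baseline window is a placeholder:

    % Sketch: remove a [-200 0] ms baseline from all channels.
    EEG = pop_rmbase(EEG, [-200 0]);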
Press Ok to subtract the baseline, or Cancel to leave the data as they are. Even after data epochs have been extracted, it is possible to extract sub-epochs with a reduced time range; the example below selects data sub-epochs with a shortened epoch time range (in ms). There is rarely a good reason to select subsets of data epochs: when comparing conditions, which is performed by creating contrasts at the STUDY level (the group analysis interface, which may also be used for single-subject analysis), one may simply ignore specific data epochs.
Nevertheless, there may be cases in which you want to remove specific artifactual or irrelevant data epochs. Select the File menu and choose the sub-menu item Load existing dataset. The simplest way to remove data epochs is by selecting epoch indices; epochs 1 to 10, for example, may be removed by entering them and checking the checkbox adjacent to the epoch edit box.
In this section, we will keep all position 1 data epochs. Press Ok.
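For scripting, the same operations can be done with pop_select (standard EEGLAB); the time window and epoch indices below are placeholders:

    % Sketch: sub-epoch extraction and epoch removal from the command line.
    EEG = pop_select(EEG, 'time', [-0.5 1.0]);   % keep only -500 to 1000 ms of each epoch
    EEG = pop_select(EEG, 'notrial', 1:10);      % remove epochs 1 through 10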