Description
Hi, after running some tests on the eFAST model I found odd behaviour in the generated confidence intervals.
I tested the model while varying a_1.
As far as I can tell, this is due to the random sampling done here:
```python
sample_idx = np.random.choice(T_data, replace=True, size=n_size)
```
This destroys the functional structure incorporated in the model output.
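To illustrate the problem, here is a minimal standalone sketch (not SALib's actual code): a periodic signal standing in for the model output along the eFAST search curve, whose variance sits at a known frequency. Resampling it with replacement, as in the quoted line, scatters that variance across the whole spectrum, so the power at the driving frequency collapses.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512
s = np.arange(N)
omega = 8  # assumed driving frequency for this toy signal
y = np.sin(2 * np.pi * omega * s / N)  # stand-in for the model output

def power_at(signal, freq):
    # Power of the Fourier coefficient at index `freq` (rfft indexing).
    coeffs = np.fft.rfft(signal)
    return np.abs(coeffs[freq]) ** 2

original = power_at(y, omega)

# Bootstrap resampling with replacement, as in the quoted line:
sample_idx = rng.choice(N, replace=True, size=N)
resampled = power_at(y[sample_idx], omega)

# The resampled series has lost the periodic structure, so almost no
# power remains at the driving frequency.
print(resampled < 0.1 * original)
```

Since the eFAST sensitivity indices are read off exactly these spectral powers, the bootstrapped indices end up biased.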
Unfortunately, I do not know what the correct bootstrapping method would be. Two methods that could work would be:
- sorting after the random sampling
- using a smaller sample while retaining order and position (e.g. `sample_idx = np.arange(random_int, random_int + n_size)` ... of course, some overflow handling is needed)
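The second method above could be sketched roughly as follows; the helper name `block_bootstrap_idx` and the wraparound-by-modulo choice are mine, not SALib's, and are just one way to handle the overflow:

```python
import numpy as np

def block_bootstrap_idx(rng, T_data, n_size):
    # Hypothetical sketch of "method 2": draw a contiguous,
    # order-preserving window of indices, wrapping around the end
    # of the series to handle overflow.
    start = rng.integers(T_data)             # random starting point
    idx = np.arange(start, start + n_size)   # contiguous and ordered
    return idx % T_data                      # wrap around past the end

rng = np.random.default_rng(42)
T_data = 100
idx = block_bootstrap_idx(rng, T_data, 60)

print(len(idx))                              # 60
print(idx.min() >= 0 and idx.max() < T_data) # all indices valid
```

Because consecutive indices stay adjacent (modulo the wraparound point), each bootstrap sample preserves the local structure of the output along the search curve, at the cost of a shorter effective sample.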
These methods result in: (figures omitted)
For comparison, I did the same analysis with Sobol and a KS test. My KS-test implementation is consistent with the results of method 2, while the Sobol implementation in this library suggests method 1. What are your thoughts on this?



