qazwsxedchac 2 days ago

This is an utterly brilliant hack for dimensionality reduction leading to pattern recognition. That it even beats SVMs (albeit with a single carefully chosen example ;-) is icing on the cake.

One thing I don't understand is the addition of the constant 3 to the row index (in the paper just after formula 6). Intuitively this should be only 2, because the last row vector of the local topology lags the last state captured in the distance matrix by one row, and then we want to move ahead one more row to start forecasting.

What am I missing?

  • ano-ther 21 hours ago

    Isn’t it because m = n - 2 (above equation 4) and you want to get to n + 1?

    • qazwsxedchac 12 hours ago

      Yes, you're right. Off-by-one error on my part, caused by concentrating on the bottom half of figure 1a while trying to visualize this and formulate my question.

motohagiography 2 days ago

Naive question, but can forecasting in a time series be applied backward to interpolate it? In the case of this FReT algorithm, the idea would be that a FReT-interpolated series (or one from SETAR, NNET, etc.) would have higher fidelity to the total information in the sequence.
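
Roughly what I have in mind, sketched with a trivial persistence forecaster standing in for FReT/SETAR/etc. (the function names and the averaging of the two passes are just my own illustration, not anything from the paper):

  import numpy as np

  def persistence_forecast(history, steps):
      # Trivial stand-in forecaster: repeat the last observed value.
      # Any point forecaster (FReT, SETAR, NNET, ...) could be slotted in here.
      return np.full(steps, history[-1])

  def fill_gap_by_forecasting(series, gap_start, gap_end, forecast=persistence_forecast):
      # Forecast forward from the left segment and "backward" from the reversed
      # right segment, then average the two passes across the gap.
      steps = gap_end - gap_start
      fwd = forecast(series[:gap_start], steps)
      bwd = forecast(series[gap_end:][::-1], steps)[::-1]
      filled = series.copy()
      filled[gap_start:gap_end] = 0.5 * (fwd + bwd)
      return filled

  t = np.linspace(0, 4 * np.pi, 200)
  x = np.sin(t)
  x[90:110] = np.nan                    # knock out a stretch of the series
  print(fill_gap_by_forecasting(x, 90, 110)[88:112])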

  • ano-ther 21 hours ago

    Interesting idea.

    The FReT algorithm (if I understand correctly) works on equally spaced points in time and projects onto the same grid into the future. So it would seem that interpolation between those points is not possible with this method.

eli_gottlieb 2 days ago

Sorry, am I missing something? "Topology" here just seems to mean connectivity, and I can't even tell why they have a notion of 3x3 connectivity-matrix structure. A whole lot of this seems under-explained.

  • qazwsxedchac a day ago

    There's an earlier paper [0] involving the same authors which explains this a bit better.

    AIUI, they use the 3x3 neighbourhoods to capture local directional and curvature (i.e. gradient) information in the distance matrix. They then apply two heuristics (reduction to an 8-bit binary number and binning into sextiles) to reduce the floating-point gradient information to coarse integers to aid pattern recognition.
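
    Roughly how I picture those two heuristics, sketched in Python. This is only my reading, not their code; the neighbour ordering, the toy series and the choice of what exactly gets binned are guesses:

      import numpy as np

      # Toy time series and its pairwise distance matrix
      x = np.sin(np.linspace(0, 6 * np.pi, 60))
      D = np.abs(x[:, None] - x[None, :])

      # Heuristic 1: each 3x3 neighbourhood becomes an 8-bit number, one bit per
      # neighbour, set when that neighbour exceeds the centre value (essentially
      # a local binary pattern; the paper's bit ordering may differ).
      offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
      codes = np.zeros_like(D, dtype=np.uint8)
      for i in range(1, D.shape[0] - 1):
          for j in range(1, D.shape[1] - 1):
              bits = [int(D[i + di, j + dj] > D[i, j]) for di, dj in offsets]
              codes[i, j] = sum(b << k for k, b in enumerate(bits))

      # Heuristic 2: bin values into sextiles (six quantile bins, integers 0..5).
      # Applied to the raw distances here for illustration; in the paper it may
      # be applied to the gradient values instead.
      edges = np.quantile(D, [1/6, 2/6, 3/6, 4/6, 5/6])
      sextiles = np.digitize(D, edges)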

    The more recent paper adds another heuristic (empirically chosen similarity threshold) to aid finding starting points of recurring patterns.
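
    And the newer paper's similarity threshold, as I understand it, amounts to something like this (the agreement measure and the 0.8 threshold are made up for illustration):

      import numpy as np

      def matching_start_points(codes, window, threshold):
          # Indices where a past window of coarse codes agrees with the most
          # recent window in at least `threshold` of its positions.
          recent = codes[-window:]
          return [s for s in range(len(codes) - 2 * window)
                  if np.mean(codes[s:s + window] == recent) >= threshold]

      # Coarse (sextile-binned) codes for a roughly periodic toy series
      rng = np.random.default_rng(0)
      y = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.05 * rng.normal(size=200)
      codes = np.digitize(y, np.quantile(y, [1/6, 2/6, 3/6, 4/6, 5/6]))
      print(matching_start_points(codes, window=20, threshold=0.8))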

    [0] https://doi.org/10.1038/s41531-021-00240-4 , Equation (5) onwards.

    • ano-ther 21 hours ago

      Thanks. What I don’t understand is how searching for similar previous patterns helps in predicting time series that are chaotic (it seems to be quite good at that).

      • qazwsxedchac 11 hours ago

        It only helps because the chaotic system under consideration has periodic components.

        The attractor shown in figure 1e has such periodic components, and identifying these does help, but only with very near-term forecasting. Once the accumulated forecast error crosses a threshold, it suddenly causes a large phase error, best seen from about point 75 onwards in the x and y components. From that point on, the forecast is useless.
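
        A toy demonstration of why even a perfect pattern match only buys a handful of steps (logistic map, nothing to do with FReT itself):

          def logistic_orbit(x0, n, r=4.0):
              # Iterate the chaotic logistic map x -> r * x * (1 - x)
              xs = [x0]
              for _ in range(n - 1):
                  xs.append(r * xs[-1] * (1 - xs[-1]))
              return xs

          ref = logistic_orbit(0.400000, 40)
          fc = logistic_orbit(0.400001, 40)   # "forecast" started with a tiny error

          for t in range(0, 40, 5):
              print(f"step {t:2d}   abs error {abs(ref[t] - fc[t]):.6f}")
          # The error grows roughly exponentially, so the phase slips completely
          # within a few dozen steps, much like from point ~75 onwards in figure 1e.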

  • keithalewis a day ago

    You are missing what is missing. Their source code does not fill in the missing pieces.