update data

actions-user committed Jan 13, 2025
1 parent 662c754 commit 97ec5bc
Showing 64 changed files with 534 additions and 581 deletions.
@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2025-01-06 06:05:58 UTC</i>
<i class="footer">This page was last updated on 2025-01-13 06:05:55 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -122,7 +122,7 @@ hide:
</td>
<td>2009-12-01</td>
<td>Inverse Problems</td>
-<td>136</td>
+<td>137</td>
<td>48</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2025-01-06 06:06:10 UTC</i>
<i class="footer">This page was last updated on 2025-01-13 06:05:59 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -74,7 +74,7 @@ hide:
</td>
<td>2015-09-11</td>
<td>Proceedings of the National Academy of Sciences</td>
-<td>3490</td>
+<td>3502</td>
<td>68</td>
</tr>

@@ -98,7 +98,7 @@ hide:
</td>
<td>2020-05-05</td>
<td>Nature Communications</td>
-<td>305</td>
+<td>309</td>
<td>13</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2025-01-06 06:05:39 UTC</i>
<i class="footer">This page was last updated on 2025-01-13 06:05:37 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -75,7 +75,7 @@ hide:
<td>2023-07-27</td>
<td>ArXiv</td>
<td>3</td>
-<td>55</td>
+<td>56</td>
</tr>

<tr id="Accurate forecasting of multivariate time series is an extensively studied subject in finance, transportation, and computer science. Fully mining the correlation and causation between the variables in a multivariate time series exhibits noticeable results in improving the performance of a time series model. Recently, some models have explored the dependencies between variables through end-to-end graph structure learning without the need for predefined graphs. However, current models do not incorporate the trade-off between efficiency and flexibility and lack the guidance of domain knowledge in the design of graph structure learning algorithms. This paper alleviates the above issues by proposing Balanced Graph Structure Learning for Forecasting (BGSLF), a novel deep learning model that joins graph structure learning and forecasting. Technically, BGSLF leverages the spatial information into convolutional operations and extracts temporal dynamics using the diffusion convolutional recurrent network. The proposed framework balance the trade-off between efficiency and flexibility by introducing Multi-Graph Generation Network (MGN) and Graph Selection Module. In addition, a method named Smooth Sparse Unit (SSU) is designed to sparse the learned graph structures, which conforms to the sparse spatial correlations in the real world. Extensive experiments on four real-world datasets demonstrate that our model achieves state-of-the-art performances with minor trainable parameters. Code will be made publicly available.">
16 changes: 8 additions & 8 deletions docs/recommendations/123acfbccca0460171b6b06a4012dbb991cde55b.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2025-01-06 06:05:40 UTC</i>
<i class="footer">This page was last updated on 2025-01-13 06:05:38 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -50,7 +50,7 @@ hide:
</td>
<td>2024-03-12</td>
<td>ArXiv</td>
-<td>85</td>
+<td>88</td>
<td>18</td>
</tr>

@@ -62,8 +62,8 @@ hide:
</td>
<td>2024-02-04</td>
<td>ArXiv</td>
-<td>7</td>
-<td>67</td>
+<td>8</td>
+<td>68</td>
</tr>

<tr id="This research examines the use of Large Language Models (LLMs) in predicting time series, with a specific focus on the LLMTIME model. Despite the established effectiveness of LLMs in tasks such as text generation, language translation, and sentiment analysis, this study highlights the key challenges that large language models encounter in the context of time series prediction. We assess the performance of LLMTIME across multiple datasets and introduce classical almost periodic functions as time series to gauge its effectiveness. The empirical results indicate that while large language models can perform well in zero-shot forecasting for certain datasets, their predictive accuracy diminishes notably when confronted with diverse time series data and traditional signals. The primary finding of this study is that the predictive capacity of LLMTIME, similar to other LLMs, significantly deteriorates when dealing with time series data that contain both periodic and trend components, as well as when the signal comprises complex frequency components.">
@@ -86,7 +86,7 @@ hide:
</td>
<td>2024-06-22</td>
<td>ArXiv</td>
-<td>18</td>
+<td>19</td>
<td>3</td>
</tr>

@@ -109,9 +109,9 @@ hide:
Yong Liu, Haoran Zhang, Chenyu Li, Xiangdong Huang, Jianmin Wang, Mingsheng Long
</td>
<td>2024-02-04</td>
-<td>DBLP, ArXiv</td>
+<td>ArXiv, DBLP</td>
<td>26</td>
-<td>67</td>
+<td>68</td>
</tr>

<tr id="In this paper, we introduce TimeGPT, the first foundation model for time series, capable of generating accurate predictions for diverse datasets not seen during training. We evaluate our pre-trained model against established statistical, machine learning, and deep learning methods, demonstrating that TimeGPT zero-shot inference excels in performance, efficiency, and simplicity. Our study provides compelling evidence that insights from other domains of artificial intelligence can be effectively applied to time series analysis. We conclude that large-scale time series models offer an exciting opportunity to democratize access to precise predictions and reduce uncertainty by leveraging the capabilities of contemporary advancements in deep learning.">
@@ -122,7 +122,7 @@ hide:
</td>
<td>2023-10-05</td>
<td>ArXiv</td>
-<td>66</td>
+<td>68</td>
<td>5</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2025-01-06 06:05:42 UTC</i>
<i class="footer">This page was last updated on 2025-01-13 06:05:40 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -86,7 +86,7 @@ hide:
</td>
<td>2022-09-20</td>
<td>IEEE Transactions on Knowledge and Data Engineering</td>
-<td>91</td>
+<td>92</td>
<td>18</td>
</tr>

@@ -122,7 +122,7 @@ hide:
</td>
<td>2024-05-23</td>
<td>ArXiv</td>
-<td>3</td>
+<td>4</td>
<td>2</td>
</tr>

@@ -134,7 +134,7 @@ hide:
</td>
<td>2023-08-16</td>
<td>ArXiv</td>
-<td>24</td>
+<td>25</td>
<td>3</td>
</tr>

@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2025-01-06 06:05:41 UTC</i>
<i class="footer">This page was last updated on 2025-01-13 06:05:38 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -62,7 +62,7 @@ hide:
</td>
<td>2024-02-13</td>
<td>ArXiv, DBLP</td>
-<td>42</td>
+<td>41</td>
<td>9</td>
</tr>

@@ -122,7 +122,7 @@ hide:
</td>
<td>2022-10-08</td>
<td>ArXiv</td>
-<td>67</td>
+<td>69</td>
<td>19</td>
</tr>

32 changes: 14 additions & 18 deletions docs/recommendations/279cd637b7e38bba1dd8915b5ce68cbcacecbe68.md
@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2025-01-06 06:05:45 UTC</i>
<i class="footer">This page was last updated on 2025-01-13 06:05:43 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -49,7 +49,7 @@ hide:
Andreas Doerr, Christian Daniel, Martin Schiegg, D. Nguyen-Tuong, S. Schaal, Marc Toussaint, Sebastian Trimpe
</td>
<td>2018-01-31</td>
-<td>DBLP, ArXiv, MAG</td>
+<td>MAG, ArXiv, DBLP</td>
<td>115</td>
<td>93</td>
</tr>
@@ -114,6 +114,18 @@ hide:
<td>61</td>
</tr>

<tr id="Over the past few years, research on deep graph learning has shifted from static graphs to temporal graphs in response to real-world complex systems that exhibit dynamic behaviors. In practice, temporal graphs are formalized as an ordered sequence of static graph snapshots observed at discrete time points. Sequence models such as RNNs or Transformers have long been the predominant backbone networks for modeling such temporal graphs. Yet, despite the promising results, RNNs struggle with long-range dependencies, while transformers are burdened by quadratic computational complexity. Recently, state space models (SSMs), which are framed as discretized representations of an underlying continuous-time linear dynamical system, have garnered substantial attention and achieved breakthrough advancements in independent sequence modeling. In this work, we undertake a principled investigation that extends SSM theory to temporal graphs by integrating structural information into the online approximation objective via the adoption of a Laplacian regularization term. The emergent continuous-time system introduces novel algorithmic challenges, thereby necessitating our development of GraphSSM, a graph state space model for modeling the dynamics of temporal graphs. Extensive experimental results demonstrate the effectiveness of our GraphSSM framework across various temporal graph benchmarks.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/919e5db29c7b7be4468b975eb4c0fa4a543165fc" target='_blank'>State Space Models on Temporal Graphs: A First-Principles Study</a></td>
<td>
Jintang Li, Ruofan Wu, Xinzhou Jin, Boqun Ma, Liang Chen, Zibin Zheng
</td>
<td>2024-06-03</td>
<td>ArXiv</td>
<td>2</td>
<td>12</td>
</tr>

<tr id="Time series modeling is a well-established problem, which often requires that methods (1) expressively represent complicated dependencies, (2) forecast long horizons, and (3) efficiently train over long sequences. State-space models (SSMs) are classical models for time series, and prior works combine SSMs with deep learning layers for efficient sequence modeling. However, we find fundamental limitations with these prior approaches, proving their SSM representations cannot express autoregressive time series processes. We thus introduce SpaceTime, a new state-space time series architecture that improves all three criteria. For expressivity, we propose a new SSM parameterization based on the companion matrix -- a canonical representation for discrete-time processes -- which enables SpaceTime's SSM layers to learn desirable autoregressive processes. For long horizon forecasting, we introduce a"closed-loop"variation of the companion SSM, which enables SpaceTime to predict many future time-steps by generating its own layer-wise inputs. For efficient training and inference, we introduce an algorithm that reduces the memory and compute of a forward pass with the companion matrix. With sequence length $\ell$ and state-space size $d$, we go from $\tilde{O}(d \ell)$ na\"ively to $\tilde{O}(d + \ell)$. In experiments, our contributions lead to state-of-the-art results on extensive and diverse benchmarks, with best or second-best AUROC on 6 / 7 ECG and speech time series classification, and best MSE on 14 / 16 Informer forecasting tasks. Furthermore, we find SpaceTime (1) fits AR($p$) processes that prior deep SSMs fail on, (2) forecasts notably more accurately on longer horizons than prior state-of-the-art, and (3) speeds up training on real-world ETTh1 data by 73% and 80% relative wall-clock time over Transformers and LSTMs.">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/a7d68b1702af08ce4dbbf2cd0b083e744ae5c6be" target='_blank'>Effectively Modeling Time Series with Simple Discrete State Spaces</a></td>
@@ -126,22 +138,6 @@
<td>46</td>
</tr>

<tr id="

Gaussian state space models have been used for decades as generative models of sequential data. They admit an intuitive probabilistic interpretation, have a simple functional form, and enjoy widespread adoption. We introduce a unified algorithm to efficiently learn a broad class of linear and non-linear state space models, including variants where the emission and transition distributions are modeled by deep neural networks. Our learning algorithm simultaneously learns a compiled inference network and the generative model, leveraging a structured variational approximation parameterized by recurrent neural networks to mimic the posterior distribution. We apply the learning algorithm to both synthetic and real-world datasets, demonstrating its scalability and versatility. We find that using the structured approximation to the posterior results in models with significantly higher held-out likelihood.

">
<td id="tag"><i class="material-icons">visibility_off</i></td>
<td><a href="https://www.semanticscholar.org/paper/2af17f153e3fd71e15db9216b972aef222f46617" target='_blank'>Structured Inference Networks for Nonlinear State Space Models</a></td>
<td>
R. G. Krishnan, Uri Shalit, D. Sontag
</td>
<td>2016-09-30</td>
<td>DBLP, ArXiv, MAG</td>
<td>440</td>
<td>48</td>
</tr>

</tbody>
<tfoot>
<tr>
@@ -11,7 +11,7 @@ hide:

<body>
<p>
<i class="footer">This page was last updated on 2025-01-06 06:05:59 UTC</i>
<i class="footer">This page was last updated on 2025-01-13 06:05:56 UTC</i>
</p>

<div class="note info" onclick="startIntro()">
@@ -86,7 +86,7 @@ hide:
</td>
<td>2020-11-22</td>
<td>Biophysical Reviews</td>
-<td>38</td>
+<td>39</td>
<td>5</td>
</tr>

@@ -111,7 +111,7 @@ hide:
<td>2022-10-28</td>
<td>Annual Conference of the PHM Society</td>
<td>0</td>
-<td>19</td>
+<td>20</td>
</tr>

<tr id="Unsteady fluid systems are nonlinear high-dimensional dynamical systems that may exhibit multiple complex phenomena in both time and space. Reduced Order Modeling (ROM) of fluid flows has been an active research topic in the recent decade with the primary goal to decompose complex flows into a set of features most important for future state prediction and control, typically using a dimensionality reduction technique. In this work, a novel data-driven technique based on the power of deep neural networks for ROM of the unsteady fluid flows is introduced. An autoencoder network is used for nonlinear dimension reduction and feature extraction as an alternative for singular value decomposition (SVD). Then, the extracted features are used as an input for a long short-term memory (LSTM) network to predict the velocity field at future time instances. The proposed autoencoder-LSTM method is compared with non-intrusive reduced order models based on dynamic mode decomposition (DMD) and proper orthogonal decomposition. Moreover, an autoencoder-DMD algorithm is introduced for ROM, which uses the autoencoder network for dimensionality reduction rather than SVD rank truncation. The results show that the autoencoder-LSTM method is considerably capable of predicting fluid flow evolution, where higher values for the coefficient of determination R2 are obtained using autoencoder-LSTM compared to other models.">
@@ -122,7 +122,7 @@ hide:
</td>
<td>2020-07-02</td>
<td>ArXiv</td>
-<td>142</td>
+<td>143</td>
<td>23</td>
</tr>

