rLLM v0.1.1: Achieving Greater Uniformity Between Transform and Convolution!


By Jianwu Zheng

It has been six months since the release of rLLM v0.1. Over this period, our team has remained committed to improving the system’s architecture, consistency, and overall usability. Guided by valuable user feedback and reported issues, we have implemented three significant improvements to the system:

  1. Complete Refinement of the Technical Roadmap from Transform to Convolution:

    We have clarified that both Tabular/Table Neural Networks (TNNs) and Graph Neural Networks (GNNs) share a common architecture: a Transform followed by multiple Convolution layers. The Transform covers the essential preprocessing steps applied to the data once, before any learning method consumes it. Each Convolution layer then combines an input signal with a filter to produce an output signal (see the sketch after this list).

  2. Expansion of the Library with Advanced TNNs and GNNs:

    We have added state-of-the-art TNN [1-3] and GNN [4, 5] algorithms to the library, alongside several classic neural network methods as baselines. To enable stronger comparisons between algorithms, we have also included three representative single-table datasets used by prior advanced TNNs.

  3. Unified and Optimized System Parameters:

    We have optimized the parameter configuration to improve code readability and usability. Drawing inspiration from leading algorithm libraries such as PyTorch [6] and torchvision [7], we have standardized the naming conventions used throughout rLLM; a small illustration follows this list. For more details, refer to the documentation.
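To make the Transform-then-Convolution roadmap from item 1 concrete, here is a minimal, self-contained sketch in plain PyTorch. It is illustrative only and does not reproduce rLLM's actual API: the names `SimpleGCNConv` and `SimpleGNN`, and the choice of symmetric adjacency normalization as the Transform step, are assumptions made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCNConv(nn.Module):
    """One Convolution layer: combine the input signal with an
    adjacency 'filter', then apply a linear projection."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj is assumed to be a dense, already-normalized adjacency matrix
        return self.linear(adj @ x)

class SimpleGNN(nn.Module):
    """A Transform (preprocessing) followed by stacked Convolution layers."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = SimpleGCNConv(in_dim, hidden_dim)
        self.conv2 = SimpleGCNConv(hidden_dim, out_dim)

    @staticmethod
    def transform(adj):
        # Transform step: symmetric normalization D^{-1/2}(A + I)D^{-1/2},
        # performed once before the Convolution layers consume the data.
        adj = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
        return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)

    def forward(self, x, adj):
        adj = self.transform(adj)
        x = F.relu(self.conv1(x, adj))
        return self.conv2(x, adj)

# Toy usage: 4 nodes, 8 input features, 3 output classes
x = torch.randn(4, 8)
adj = (torch.rand(4, 4) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()  # make the toy adjacency symmetric
out = SimpleGNN(8, 16, 3)(x, adj)
print(out.shape)  # torch.Size([4, 3])
```

The same split applies to TNNs: a one-time preprocessing Transform on the table, followed by stacked Convolution layers that filter the column or row embeddings.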
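As a hypothetical illustration of what the unified parameters in item 3 buy you: when every Convolution layer exposes the same constructor signature, a configuration transfers directly between table and graph models. The names `TableConv`, `GraphConv`, `in_dim`, `out_dim`, and `dropout` below are placeholders chosen for the example, not rLLM's actual identifiers.

```python
import torch.nn as nn

class TableConv(nn.Module):
    """Toy table convolution with a standardized constructor signature."""
    def __init__(self, in_dim: int, out_dim: int, dropout: float = 0.0):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, out_dim), nn.Dropout(dropout))

    def forward(self, x):
        return self.proj(x)

class GraphConv(nn.Module):
    """Toy graph convolution sharing the exact same parameter names."""
    def __init__(self, in_dim: int, out_dim: int, dropout: float = 0.0):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, out_dim), nn.Dropout(dropout))

    def forward(self, x, adj):
        return adj @ self.proj(x)

# Both layers accept identical keyword arguments, so one config dict
# can instantiate either family without renaming anything.
cfg = dict(in_dim=32, out_dim=64, dropout=0.1)
table_layer = TableConv(**cfg)
graph_layer = GraphConv(**cfg)
```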

With this release, rLLM has evolved into a more robust and feature-rich algorithm library. Additional features and improvements are under development. Stay tuned for more updates!

References:

[1] Chen J, Yan J, Chen Q, et al. ExcelFormer: A neural network surpassing GBDTs on tabular data[J]. arXiv preprint arXiv:2301.02819, 2023.

[2] Chen K Y, Chiang P H, Chou H R, et al. Trompt: Towards a better deep neural network for tabular data[J]. arXiv preprint arXiv:2305.18446, 2023.

[3] Somepalli G, Goldblum M, Schwarzschild A, et al. SAINT: Improved neural networks for tabular data via row attention and contrastive pre-training[J]. arXiv preprint arXiv:2106.01342, 2021.

[4] Wang Z, Ding H, Pan L, et al. From cluster assumption to graph convolution: Graph-based semi-supervised learning revisited[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024.

[5] Hu Z, Dong Y, Wang K, et al. Heterogeneous graph transformer[C]//Proceedings of The Web Conference 2020. 2020: 2704-2710.

[6] Ansel J, Yang E, He H, et al. PyTorch 2: Faster machine learning through dynamic Python bytecode transformation and graph compilation[C]//Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2 (ASPLOS '24). 2024. https://doi.org/10.1145/3620665.3640366

[7] TorchVision maintainers and contributors. TorchVision: PyTorch's computer vision library[CP]. 2016. https://github.com/pytorch/vision

Tags: news