Mohamed Bouadi
Pratinav Seth
Aditya Tanna
Vinay Kumar Sankarapu

Orion-MSP: Multi-Scale Sparse Attention for Tabular In-Context Learning

November 4, 2025
Paris Lab
Mumbai Lab

1  Introduction

Tabular data remain the most prevalent form of data in real-world applications, spanning critical systems across healthcare, finance, and scientific research. Despite the remarkable progress of deep learning in natural language processing [27, 42] and computer vision [11], gradient boosted trees (GBTs) remain the predominant state-of-the-art (SOTA) for tabular prediction tasks. In other data modalities, foundation models—particularly Large Language Models (LLMs) [46, 26]—have significantly advanced the ability to tackle new tasks and few-shot learning. This is largely due to their remarkable in-context learning (ICL) capabilities [45, 4], which enable them to capture patterns directly from prompts without updating their parameters. This success, combined with the pervasiveness of tables, has spurred interest in tabular foundation models [38].

Although LLMs are primarily designed to process natural language, recent efforts have explored fine-tuning them for tabular data tasks [14, 8]. These approaches typically rely on table serialization, which is the process of converting table rows into text or sentences suitable for tokenization. For instance, [9] fine-tuned a Llama 3-8B model on a large corpus of serialized tables and demonstrated that this strategy can outperform traditional tree-based models in few-shot scenarios. However, such language model–based approaches face inherent challenges. Their limited context windows restrict the number of serialized examples that can be processed simultaneously (e.g., up to 32 or 64 shots in [9]), and it remains uncertain whether LLMs can reliably interpret and reason over numerical values [37].

Adopting a fundamentally different strategy, the authors of [16] introduced TabPFN, a transformer-based tabular foundation model designed for classification tasks and pretrained exclusively on synthetic tabular data. A key feature of TabPFN is its ability to perform in-context learning directly on tables, removing the need for tokenization and allowing efficient processing of relatively small datasets—up to 1K samples and 100 features. Building on this foundation, TabICL [32] introduced a simplified three-component architecture comprising: (1) column-wise embeddings via Set Transformers to capture distribution-aware feature semantics, (2) row-wise interactions with rotary positional encodings to model inter-feature dependencies, and (3) dataset-level ICL prediction through split attention, ensuring a clear separation between training and test samples. These developments position tabular foundation models as a compelling alternative to traditional approaches, particularly for zero-shot prediction tasks where dataset-specific training is infeasible.

However, current table-native ICL architectures face several fundamental limitations that hinder their practical deployment and scalability. First, existing tabular ICL architectures, including TabICL, process features uniformly at a single scale, missing hierarchical interaction patterns that naturally occur in real-world tabular data. Just as computer vision benefits from multi-scale processing—capturing edges at fine scales and objects at coarse scales—tabular data exhibits structure at multiple granularities: individual features interact locally (e.g., age and income), feature clusters form semantic groups (e.g., demographic attributes), and high-level blocks represent major data divisions (e.g., personal attributes versus behavioral patterns). Processing all features uniformly fails to capture these hierarchical relationships, limiting the model’s ability to learn robust and interpretable representations.

Second, the dense attention mechanisms scale quadratically with feature count, \(O(m^2)\), where m denotes the number of features. While TabICL addresses sample scalability through its column-then-row architecture, the quadratic feature complexity becomes computationally prohibitive for high-dimensional tables with more than 100 features common in genomics, finance, and sensor applications. For tables with m=100 features, dense attention requires 10,000 attention operations per layer, with memory requirements growing quadratically. This fundamental scalability barrier limits the practical deployment of tabular foundation models on wide real-world datasets.

Third, the strictly sequential processing pipeline in TabICL (column embedding → row interaction → ICL prediction) prevents iterative refinement and bidirectional information flow between architectural components. While each component produces rich representations, the unidirectional nature of the pipeline means that downstream insights (e.g., dataset-level patterns discovered during ICL) cannot inform upstream representations (e.g., refining feature embeddings based on dataset context). This limitation constrains the model’s ability to leverage holistic dataset understanding for improved predictions, and prevents the kind of iterative refinement that has proven beneficial in multimodal architectures.

To address these limitations, we introduce Orion-MSP, a novel tabular foundation model that extends TabICL with three synergistic architectural innovations. First, we propose multi-scale hierarchical feature processing that simultaneously captures interactions at multiple granularities (individual features, groups of 4, and groups of 16), enabling the model to learn representations at different levels of abstraction analogous to hierarchical processing in computer vision. Second, we design structured block-sparse attention patterns combining windowed local attention, global tokens for long-range dependencies, and random connectivity for universal approximation, reducing computational complexity from quadratic to near-linear in the number of features while maintaining expressiveness. Third, we introduce Perceiver-style cross-component memory that enables bidirectional information flow between architectural stages while provably maintaining in-context learning safety constraints—ensuring test data never influences training representations through formal ICL safety analysis.

The column-wise embedding component of Orion-MSP follows TabICL’s approach [32], using Set Transformers with Induced Set Attention Blocks (ISAB) [25] to create distribution-aware feature embeddings in a permutation-invariant manner. The multi-scale row interaction component processes these embeddings at multiple resolutions, with each scale using sparse attention patterns tailored to its granularity. The resulting multi-scale representations are aggregated into unified row embeddings, which then interact with the Perceiver memory before proceeding to the final ICL prediction stage. This ICL component employs split attention with label injection, ensuring proper train-test separation.

Through extensive experiments across diverse tabular benchmarks, we demonstrate that Orion-MSP achieves competitive accuracy with state-of-the-art tabular ICL methods while enabling scalability to tables with more than 100 features where existing methods fail due to memory constraints. Our work establishes that hierarchical multi-scale processing, structured sparsity, and cross-component memory can simultaneously improve both effectiveness and efficiency in tabular foundation models, opening new application domains previously inaccessible to tabular in-context learning methods.

2  Related Work

Tabular In-Context Learning: The application of in-context learning (ICL) to tabular data has recently attracted significant attention. TabPFN [16] pioneered this direction by meta-training a transformer on synthetically generated datasets using structural causal models. Its encoder–decoder design allows test samples to attend to training examples, enabling zero-shot predictions without gradient-based fine-tuning. While TabPFN demonstrated strong performance on small datasets, its alternating column- and row-wise attention mechanisms make scaling to larger tables computationally prohibitive.

TabDPT [28] showed that comparable performance can be achieved on real-world datasets by using similarity-based retrieval to construct contextual examples—an idea first explored in TabR [12]. The authors extended this paradigm by integrating diffusion-based representation learning, improving robustness to missing values and distributional shifts. However, the diffusion process introduces substantial computational overhead and retains dense attention, limiting scalability. Similarly, TabPFN-v2 [17] introduced cell-based in-context learning, extending row-wise encoding to datasets exceeding 10,000 samples, but it still inherits quadratic attention costs in high-dimensional tables.

Building on these foundations, TabICL [32] proposed a table-native transformer architecture with three components: column embedding via Set Transformers, row-wise interaction with rotary positional encodings, and an in-context learning prediction module. This design achieved state-of-the-art results across diverse benchmarks while maintaining architectural simplicity and training efficiency. Nonetheless, dense attention in row interactions and the strictly sequential pipeline limit iterative refinement, cross-component communication, and scalability to tables with more than 100 features.

ContextTab [35] further enhanced tabular in-context learning by incorporating contextualized feature embeddings and attention mechanisms tailored for heterogeneous tabular data. While improving performance in complex datasets, it still processes features at a single scale and relies on dense attention, limiting computational efficiency on high-dimensional tables.

Collectively, existing tabular in-context learning models demonstrate strong performance yet share core limitations: dense quadratic attention, uniform single-scale processing, and lack of cross-component feedback.

Sparse Attention Mechanisms: Sparse attention techniques from natural language processing offer a promising route to improve computational efficiency in tabular in-context learning. BigBird [43] and Longformer [1] demonstrated that block-sparse attention patterns can approximate dense attention with linear complexity while maintaining strong theoretical guarantees. Similarly, Sparse Transformers [5, 20] employ structured sparsity for generative modeling, reducing computation without substantial performance degradation. Despite their success in sequential data, these methods have yet to be systematically adapted for tabular in-context learning, where the primary challenge lies in feature dimension rather than sequence length.

Hierarchical and Multi-Scale Architectures: Hierarchical architectures have proven effective in other domains. Funnel Transformers [6] and Swin Transformers [30] use multi-scale processing and pooling to capture information at different resolutions, while Set Transformers [34, 10] leverage pooling by multihead attention for permutation-invariant set processing. Although TabICL [32] employs Set Transformers for column embeddings, it does not incorporate hierarchical multi-scale processing or iterative pooling across feature groups, limiting its ability to model complex interactions in high-dimensional tables.

Cross-Component Communication: Cross-component memory and iterative refinement have shown success in multimodal learning. Perceiver [19] and Perceiver IO [18] introduce latent bottlenecks to compress and share information across modalities, and vision-language models [39] leverage iterative cross-attention for refinement. However, these approaches do not address the causal constraints of in-context learning, where test examples must never influence the representation of training data, leaving a gap for tabular in-context learning.

3  Proposed Approach: Orion-MSP

3.1  Problem Formulation

Consider a tabular dataset \( \mathcal{D} = \{(x_i, y_i)\}_{i=1}^{n} \) with \( n \) samples and \( m \) features. Let \( X \in \mathbb{R}^{n \times m} \) denote the feature matrix, where each column \( \mathbf{c}_j \in \mathbb{R}^{n} \) \((j \in \{1, \ldots, m\})\) represents the values of the \( j\)-th feature across all samples.

In the in-context learning setting, we are given a context set of \( n_{\text{train}} \) labeled examples:

\[ \mathcal{C} = \{(x_i, y_i)\}_{i=1}^{n_{\text{train}}} \]

and a set of \( n_{\text{test}} \) query samples:

\[ \mathcal{Q} = \{ x_i \}_{i=1}^{n_{\text{test}}} \]

Our goal is to predict the conditional distribution of the target for each query sample given the context set:

\[ p(y \mid x, \mathcal{C}), \quad \forall\, x \in \mathcal{Q}. \]
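
To make the setting concrete, the following minimal sketch shows the inference interface implied by this formulation: a single forward pass maps a labeled context set \( \mathcal{C} \) and unlabeled queries \( \mathcal{Q} \) to class probabilities, with no gradient updates at inference time. The callable `model` and its signature are hypothetical placeholders, not the released Orion-MSP API.

import torch

def predict_in_context(model, X_ctx, y_ctx, X_query):
    """X_ctx: (n_train, m), y_ctx: (n_train,), X_query: (n_test, m)."""
    # Stack context and query rows into one table; n = n_train + n_test.
    X = torch.cat([X_ctx, X_query], dim=0).unsqueeze(0)   # (1, n, m)
    y = y_ctx.unsqueeze(0)                                 # (1, n_train)
    with torch.no_grad():
        # Returns p(y | x, C) for every query in a single forward pass.
        probs = model(X, y, n_train=X_ctx.shape[0])        # (1, n_test, K)
    return probs.squeeze(0)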

3.2  High-level Structure: From Data to ICL

Orion-MSP consists of four core components that collectively enable efficient and generalizable tabular in-context learning: (1) Column Embedding: transforms raw tabular features into dense, semantically meaningful representations; (2) Multi-Scale Sparse Row Interaction: captures dependencies at multiple granularities via a hierarchy of attention scales, combining CLS and GLOBAL tokens for local and long-range connectivity; (3) Cross-Component Perceiver Memory: introduces a latent memory bottleneck that enables safe bidirectional communication between modules, promoting iterative refinement without information leakage; (4) Dataset-wise In-Context Learning Predictor: leverages the enriched representations to perform zero-shot prediction across new tasks without gradient updates. An overview of the complete architecture is shown in Figure 1.

Figure 1: An overview of the Orion-MSP architecture. First, column-wise embedding transforms the input table into embedding vectors E. Next, multi-scale sparse row interaction prepends learnable [CLS] and [GLOBAL] tokens to E, processes features at multiple granularities (scales 1, 4, and 16) with sparse attention transformers, and aggregates [CLS] outputs across scales to yield row embeddings H. Cross-component Perceiver memory enables bidirectional communication: training rows write to latent memory, which all rows read for enhanced representations R. Finally, ICL predicts test labels from R in a single forward pass.

Orion-MSP extends the original TabICL [32] architecture with three complementary innovations designed to address the fundamental challenges of tabular data processing: computational inefficiency, limited feature interaction modeling, and the need for hierarchical pattern recognition. Our approach maintains the core in-context learning paradigm while introducing architectural enhancements that significantly improve both efficiency and performance.

3.3  Column-wise Embedding

Tabular data exhibits unique characteristics compared to other modalities: each column represents a distinct feature with its own distribution, scale, and statistical properties (e.g., mean, variance, skewness, kurtosis). To capture these distributional characteristics, we adopt the original TabICL column-wise embedder to map each scalar cell in a column \( \mathbf{c}_j \in \mathbb{R}^{n} \) to a \( d\)-dimensional representation using a shareable Set Transformer, \( \mathrm{TF}_{\text{col}} \), that treats the column as a permutation-invariant set of values. Our goal is to transform each cell value \( X_{ij} \) into a \( d\)-dimensional embedding \( \mathbf{E}_{ij} \in \mathbb{R}^{d} \) that encodes both:

  1. The value of the cell \( (X_{ij}) \)
  2. The distributional context of the column \( (\mathbf{c}_j) \)

This differs fundamentally from standard embedding approaches (e.g., word embeddings) where each discrete token has a fixed embedding regardless of context. In tabular data, the meaning of a value depends heavily on the column’s distribution: a value of 50 may be typical in one feature but an outlier in another.

Concretely, \( \mathrm{TF}_{\text{col}} \) predicts a per-cell affine map, assigning each cell its own weight and bias. The process consists of three main steps:

3.3.1  Initial Projection

Project the column values into a \( d\)-dimensional embedding space:

\[ \mathbf{U}_j \;=\; \mathrm{Linear}_{\mathrm{proj}}(\mathbf{c}_j) \;\in\; \mathbb{R}^{n \times d} \tag{1} \]

where \( \mathrm{Linear}_{\mathrm{proj}} : \mathbb{R} \rightarrow \mathbb{R}^{d} \) is a learned linear transformation. This creates initial token embeddings for each cell in the column.

3.3.2  Induced Set Attention Blocks (ISAB)

To efficiently capture global distributional information while maintaining computational tractability, we employ ISAB with \( k \) learnable inducing points. It consists of two sequential Multi-Head Attention Blocks (\( \mathrm{MAB}_1, \mathrm{MAB}_2 \)):

\[ \mathbf{M}_j \;=\; \mathrm{MAB}_1(\mathbf{I}, \mathbf{U}_j^{\text{train}}, \mathbf{U}_j^{\text{train}}) \;\in\; \mathbb{R}^{k \times d} \tag{2} \]
\[ \mathbf{V}_j \;=\; \mathrm{MAB}_2(\mathbf{U}_j, \mathbf{M}_j, \mathbf{M}_j) \;\in\; \mathbb{R}^{n \times d} \tag{3} \]

where \( \mathbf{I} \in \mathbb{R}^{k \times d} \) denotes \( k \) trainable inducing point embeddings (\( k \ll n \)), which serve as a compressed representation of the column distribution.

We define a Multi-Head Attention Block as:

\[ \mathrm{MAB}(\mathbf{Q}, \mathbf{K}, \mathbf{V}) \;=\; \mathrm{LayerNorm}\!\left(\mathbf{H} + \mathrm{MultiHead}(\mathbf{Q}, \mathbf{K}, \mathbf{V})\right) \tag{4} \]

where \( \mathbf{H} \) is a residual connection (set to \( \mathbf{Q} \) if dimensions match, otherwise passed through a projection), and:

\[ \mathrm{MultiHead}(\mathbf{Q}, \mathbf{K}, \mathbf{V}) \;=\; \mathrm{Concat}(\text{head}_{1}, \ldots, \text{head}_{h}) \, \mathbf{W}_{O} \tag{5} \]

with each head defined as:

\[ \text{head}_i = \text{Attention}(\mathbf{Q}\mathbf{W}_Q^i, \mathbf{K}\mathbf{W}_K^i, \mathbf{V}\mathbf{W}_V^i) \tag{6} \]

Following TabICL [32], we use \( d = 128, k = 128 \), 4 heads, and 3 ISAB blocks. Crucially, in Equation 2, we use only training samples \( \mathbf{U}_j^{\text{train}} \in \mathbb{R}^{n_{\text{train}} \times d} \) as keys and values. This ensures that the inducing points \( \mathbf{M}_j \) capture the distribution of the training data only, preventing information leakage from test samples during embedding and preserving the in-context learning paradigm.

In Equation 3, all samples (training and test) query the inducing points to obtain their contextualized embeddings. The inducing points act as a distributional summary: they encode statistical properties (e.g., mean, variance, skewness) of the training column values, and each cell embedding is adjusted based on where it lies within this learned distribution.
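
To make this asymmetry concrete, here is a minimal PyTorch sketch of Equations 2–3 for a single column. It uses `nn.MultiheadAttention` in place of the full MAB of Equation 4, so the pre-norm and feed-forward details are simplified; the class name and the residual placement are illustrative, not the released implementation.

import torch
import torch.nn as nn

class ISABTrainOnly(nn.Module):
    """Sketch of Eqs. 2-3: inducing points attend to *training* cells only
    (write), then every cell queries the inducing points (read)."""
    def __init__(self, d=128, k=128, heads=4):
        super().__init__()
        self.inducing = nn.Parameter(torch.randn(k, d) * 0.02)   # I in R^{k x d}
        self.mab1 = nn.MultiheadAttention(d, heads, batch_first=True)
        self.mab2 = nn.MultiheadAttention(d, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d)
        self.norm2 = nn.LayerNorm(d)

    def forward(self, U, n_train):
        # U: (n, d) cell embeddings of one column; the first n_train rows are training samples.
        I = self.inducing.unsqueeze(0)                 # (1, k, d)
        U_train = U[:n_train].unsqueeze(0)             # (1, n_train, d)
        M, _ = self.mab1(I, U_train, U_train)          # Eq. 2: inducing points summarize train cells only
        M = self.norm1(I + M)                          # M_j in R^{k x d}
        V, _ = self.mab2(U.unsqueeze(0), M, M)         # Eq. 3: all cells (train and test) read M_j
        V = self.norm2(U.unsqueeze(0) + V)
        return V.squeeze(0)                            # (n, d) contextualized cell embeddings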

3.3.3  Weight and Bias Generation

The ISAB output \( \mathbf{V}_j \) is passed through a feedforward network to generate cell-specific weights and biases:

\[ \mathbf{W}_j, \mathbf{B}_j = \text{FFN}(\mathbf{V}_j), \quad \mathbf{W}_j, \mathbf{B}_j \in \mathbb{R}^{n \times d} \tag{7} \]

where:

\[ \text{FFN}(\mathbf{V}_j) = \text{Linear}_{\text{out}}(\text{GELU}(\text{Linear}_{\text{hidden}}(\mathbf{V}_j))) \tag{8} \]

The final embeddings are then computed as:

\[ \mathbf{E}_{\cdot, j, :} = \mathbf{W}_j \odot \mathbf{c}_j + \mathbf{B}_j \in \mathbb{R}^{n \times d} \tag{9} \]

where \( \odot \) denotes the element-wise (Hadamard) product, and \( \mathbf{c}_j \) is broadcast to shape \((n, d)\). This formulation allows each cell’s embedding to be a function of both its raw value (\( \mathbf{c}_j \)) and the column’s learned distributional properties (\( \mathbf{W}_j, \mathbf{B}_j \)).

Note that, in our architecture, row-wise interaction requires prepending special tokens (e.g., [CLS], [GLOBAL]) to each row. To accommodate these, the column embedding reserves \( C \) positions at the beginning of each column:

\[ \mathbf{E} \in \mathbb{R}^{n \times (m + C) \times d} \tag{10} \]

For the reserved positions (indices \(1 \leq j \leq C\)), we use a skippable linear layer that outputs zeros or small random values:

\[ \mathbf{E}_{\cdot, j, :} = \begin{cases} \text{SkipLinear}(\mathbf{c}_j) & \text{if } j \leq C \text{ (reserved)} \\ \mathbf{W}_j \odot \mathbf{c}_j + \mathbf{B}_j & \text{if } j > C \text{ (features)} \end{cases} \tag{11} \]

where SkipLinear is a linear layer with very small initialization, allowing the model to learn appropriate embeddings for reserved positions during training.
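
A short sketch of Equations 7–11, taking the ISAB output \( \mathbf{V}_j \) from the previous step as given. The FFN hidden width (4d) and the near-zero initialization scale for the reserved-slot layer are illustrative assumptions; splitting the FFN output into a weight and a bias half follows Equation 7.

import torch
import torch.nn as nn

d = 128
ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, 2 * d))  # Eq. 8
skip_linear = nn.Linear(1, d)
nn.init.normal_(skip_linear.weight, std=1e-4)   # near-zero init for reserved slots (assumption)
nn.init.zeros_(skip_linear.bias)

def embed_column(c_j, V_j, reserved=False):
    """c_j: (n,) raw column values; V_j: (n, d) ISAB output from Eq. 3."""
    if reserved:                                   # Eq. 11, j <= C: reserved token slots
        return skip_linear(c_j.unsqueeze(-1))      # (n, d)
    W_j, B_j = ffn(V_j).chunk(2, dim=-1)           # Eq. 7: per-cell weight and bias, each (n, d)
    return W_j * c_j.unsqueeze(-1) + B_j           # Eq. 9: broadcast Hadamard product plus bias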

The Set Transformer architecture makes \( \text{TF}_{\text{col}} \) permutation-equivariant with respect to the order of samples within a column: permuting the rows of a column permutes the cell embeddings accordingly. Formally, let \( \pi : [n] \to [n] \) be any permutation, and let \( \mathbf{c}_j' = \mathbf{P}_\pi \mathbf{c}_j \) where \( \mathbf{P}_\pi \) is the corresponding permutation matrix. Then:

\[ \text{TF}_{\text{col}}(\mathbf{c}_j') = \mathbf{P}_\pi \text{TF}_{\text{col}}(\mathbf{c}_j) \tag{12} \]

This property is inherited from the attention mechanism in ISAB, where the softmax normalization and weighted aggregation are invariant to input order.

The inducing points \( \mathbf{M}_j \in \mathbb{R}^{k \times d} \) learned by the first MAB serve as a distributional summary of column \( j \). Empirically, we observe that:

  • Columns with similar statistical moments (mean, variance, skewness, kurtosis) have similar inducing point representations (measured by cosine similarity).
  • The inducing points capture multi-modal distributions: for categorical features encoded numerically, different modes correspond to different cluster centers in the inducing-point space.
  • Outliers in \( \mathbf{c}_j \) receive distinct embeddings, as their attention weights to \( \mathbf{M}_j \) differ significantly from typical values.

3.4  Multi-Scale Sparse Row-Wise Interaction

While column-wise embedding captures distributional properties of individual features, row-wise interaction must model complex dependencies across features to extract meaningful sample representations. However, directly applying dense self-attention to all feature tokens incurs quadratic complexity \(O(m^2)\) and may overfit when the number of features varies significantly across datasets. To address these challenges, we introduce a hierarchical multi-scale sparse attention mechanism that processes features at multiple granularities with efficient block-sparse patterns.

3.4.1  Motivation and Design Principles

Tabular datasets exhibit several unique characteristics that complicate feature interaction modeling:

  1. Variable feature counts: The number of features m varies dramatically across datasets, making fixed-scale architectures suboptimal.
  2. Heterogeneous feature relationships: Some features interact locally (e.g., age and age-related health metrics), while others have global dependencies (e.g., categorical indicators).
  3. Computational constraints: Dense attention over m features has complexity \(O(m^2)\), becoming prohibitive for wide tables or long context windows.
  4. Overfitting risks: Full attention can memorize training-specific feature correlations that do not generalize to new datasets.

Inspired by hierarchical representations in vision [41] and multi-resolution modeling in speech [44], Orion-MSP decomposes feature interactions into multiple resolution levels:

  • Fine scale (s=1): Captures detailed pairwise dependencies between individual features.
  • Coarse scales (s>1): Aggregates semantically related features into groups, reducing sequence length and enabling broader contextual reasoning.
  • Scale aggregation: Combines representations across scales to balance local precision and global context.
Figure 2: Building blocks of the attention mechanism used in Orion-MSP. White color indicates absence of attention. (a) special attention includes CLS = 4 and global attention with GB = 4, (b) sliding window attention with w = 8, (c) random attention with r = 2, (d) the combined row attention pattern of the Orion-MSP model.

To further improve efficiency and generalization, we adopt a block-sparse attention pattern inspired by Longformer [1] and BigBird [43], as depicted in Figure 2:

  • Sliding window attention: Local connectivity within a fixed radius w, preserving fine-grained structure.
  • Global tokens: Specialized tokens with full connectivity, ensuring stable long-range information flow.
  • Random links: Optional sparse stochastic connections that enhance expressivity and global reachability.

This design reduces attention complexity from \(O(m^2)\) to \(O(m \cdot (w + g + r))\), where \(w, g, r\) are the window size, the number of global tokens, and the number of random links, respectively.

Formally, the multi-scale sparse row-wise transformer, \( \text{TF}^{\text{MS}}_{\text{row}} \), processes column-embedded features \( \mathbf{E} \in \mathbb{R}^{B \times n \times (m + C) \times d} \) to generate row-wise embeddings \( \mathbf{H} \in \mathbb{R}^{B \times n \times (N_{\text{cls}} \cdot d)} \):

\[ \mathbf{H} = \text{TF}^{\text{MS}}_{\text{row}}(\mathbf{E}, \mathbf{d}_{\text{valid}}) \in \mathbb{R}^{B \times n \times (N_{\text{cls}} \cdot d)} \tag{13} \]

where \(B\) is the number of datasets, \(n\) the number of samples per dataset, \(m\) the number of features, and \(d\) the embedding dimension. The constant \( C = N_{\text{cls}} + N_{\text{global}} \) accounts for special token slots, and \(\mathbf{d}_{\text{valid}} \in \mathbb{R}^B\) optionally indicates the number of valid features per dataset for handling variable-length inputs.

The transformation proceeds through the following steps:

3.4.2 Multi-Scale Feature Grouping

First, for each scale \( s \in \mathcal{S} = \{ s_1, s_2, \ldots, s_M \} \) (e.g., \( \mathcal{S} = \{1, 4, 16\} \)), we group the \(m\) feature tokens into \( K_s = \lceil m / s \rceil \) groups of size \(s\).

The default grouping strategy uses a learnable soft grouping via Pooling by Multihead Attention (PMA) [24] to adaptively attend to features:

\[ \begin{aligned} \mathbf{Q}_s &= \text{Seed}_s + \text{PE}(\mathbf{K}_s), \quad \mathbf{Q}_s \in \mathbb{R}^{K_s \times d} \\ \mathbf{K}_s &= \text{Linear}_k(\mathbf{E}), \quad \mathbf{K}_s \in \mathbb{R}^{B \times n \times m \times d} \\ \mathbf{V}_s &= \text{Linear}_v(\mathbf{E}), \quad \mathbf{V}_s \in \mathbb{R}^{B \times n \times m \times d} \\ \mathbf{A}_s &= \text{softmax}\!\left( \frac{\mathbf{Q}_s \mathbf{K}_s^{\top}}{\sqrt{d}} \right), \quad \mathbf{A}_s \in \mathbb{R}^{B \times n \times K_s \times m} \\ \mathbf{G}^{(s)} &= \mathbf{A}_s \mathbf{V}_s, \quad \mathbf{G}^{(s)} \in \mathbb{R}^{B \times n \times K_s \times d} \end{aligned} \]

where \( \text{Seed}_s \in \mathbb{R}^{K_s \times d} \) is a learnable seed embedding, and \(\text{PE}(\mathbf{K}_s)\) adds sinusoidal positional encodings. PMA allows the model to learn which features to group together, adapting to dataset-specific correlation structures.
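
A minimal sketch of this soft grouping, using `nn.MultiheadAttention` as the PMA block. The class name is ours, the sinusoidal term \(\text{PE}(\cdot)\) is omitted for brevity, and the batch and sample dimensions are folded into a single "rows" axis; only the seed-query mechanism of the equations above is illustrated.

import math
import torch
import torch.nn as nn

class PMAGrouping(nn.Module):
    """Pooling by Multihead Attention: K_s learnable seeds attend to the m
    feature embeddings of each row, producing K_s soft feature groups at scale s."""
    def __init__(self, d=128, heads=4, m=32, s=4):
        super().__init__()
        self.K_s = math.ceil(m / s)                              # number of groups
        self.seed = nn.Parameter(torch.randn(self.K_s, d) * 0.02)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)

    def forward(self, E):
        # E: (rows, m, d) column-embedded features (B and n folded into rows).
        q = self.seed.unsqueeze(0).expand(E.shape[0], -1, -1)    # (rows, K_s, d) seed queries
        G, _ = self.attn(q, E, E)                                # soft grouping over features
        return G                                                 # (rows, K_s, d)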

3.4.3 Special Tokens Injection

For each row at each scale, we prepend special tokens:

  1. \( \text{CLS} \in \mathbb{R}^{N_{\text{cls}} \times d} \) (learnable, per-row summary)
  2. \( \text{GLOBAL} \in \mathbb{R}^{N_{\text{global}} \times d} \) (learnable, long-range connectivity)

The full sequence at scale \(s\) becomes:

\[ \mathbf{X}_s = [\text{CLS}, \text{GLOBAL}, \mathbf{G}^{(s)}] \in \mathbb{R}^{B \times n \times (N_{\text{special}} + K_s) \times d} \tag{14} \]

where \( N_{\text{special}} = N_{\text{cls}} + N_{\text{global}} \).

3.4.4 Block-Sparse Attention Mask

As depicted in Figure 2, for each scale, we construct a sparse attention mask \( \mathbf{M}_s \in \mathbb{R}^{L_s \times L_s} \), where \( L_s = N_{\text{special}} + K_s \). The mask follows the sparsity rules listed below to control information flow efficiently at each scale; a code sketch of the mask construction follows the list.

Structured Sparse Attention Mask

  1. Fully Connected Special Tokens: The first \( N_{\text{special}} \) tokens (CLS and GLOBAL) are fully connected to all other tokens and to each other.
    \[ \mathbf{M}_s[i,j] \;=\; 0 \quad \forall\, i \in [1, N_{\text{special}}] \ \text{or}\ j \in [1, N_{\text{special}}] \tag{15} \]
  2. Sliding Window Attention: Feature tokens (indices \(> N_{\text{special}}\)) attend to neighbors within a window of radius \( w = 8 \).
    \[ \mathbf{M}_s[i,j] \;=\; \begin{cases} 0, & \text{if } |i-j|\le w \ \text{and}\ i,j > N_{\text{special}} \\ -\infty, & \text{otherwise} \end{cases} \tag{16} \]
  3. Random Links (Optional): For each feature token \( i > N_{\text{special}} \), randomly select \( r \) additional tokens to attend to.
    \[ \mathbf{M}_s[i, j_k] \;=\; 0 \quad \text{for } k \in [1,r], \quad j_k \sim \mathrm{Uniform}\!\big(\{N_{\text{special}}{+}1,\ldots,L_s\}\setminus\{i\}\big) \tag{17} \]
    The final mask ensures self-attention is always allowed:
    \[ \mathbf{M}_s[i,i] \;=\; 0 \quad \forall\, i \in [1, L_s] \tag{18} \]
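
The following sketch builds the additive mask of Equations 15–18 for one scale, with 0 marking allowed positions and \(-\infty\) blocked ones, matching the masking convention of Equation 21. The function name mirrors the BuildBlockSparseMask step of Algorithm 1 but is an illustration, not the released code.

import torch

def build_block_sparse_mask(L_s, n_special, w=8, r=2, generator=None):
    """Additive attention mask (Eqs. 15-18): 0 = attend, -inf = blocked."""
    mask = torch.full((L_s, L_s), float("-inf"))
    # Eq. 15: CLS/GLOBAL tokens are fully connected as both queries and keys.
    mask[:n_special, :] = 0.0
    mask[:, :n_special] = 0.0
    # Eq. 16: sliding-window attention among feature-group tokens.
    idx = torch.arange(L_s)
    window = (idx.unsqueeze(1) - idx.unsqueeze(0)).abs() <= w
    feat = idx >= n_special
    mask[window & feat.unsqueeze(1) & feat.unsqueeze(0)] = 0.0
    # Eq. 17: r random links per feature token.
    for i in range(n_special, L_s):
        choices = [j for j in range(n_special, L_s) if j != i]
        picks = torch.randperm(len(choices), generator=generator)[:r]
        for p in picks:
            mask[i, choices[int(p)]] = 0.0
    # Eq. 18: self-attention always allowed.
    mask.fill_diagonal_(0.0)
    return mask  # passed as an additive attn_mask to each encoder layer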

3.4.5  Transformer Encoder per Scale

For each scale \( s \in \mathcal{S} \), apply a dedicated Transformer encoder:

\[ \mathbf{Z}_s \;=\; \mathrm{Encoder}_s(\mathbf{X}_s, \mathbf{M}_s) \;\in\; \mathbb{R}^{B \times n \times L_s \times d} \tag{19} \]

where \( \mathrm{Encoder}_s \) consists of \( N^{\text{tot}}_{\text{blocks}}/|\mathcal{S}| \) stacked Transformer blocks with:

  • Multi-head self-attention: \( \mathrm{MHA}(\mathbf{Q},\mathbf{K},\mathbf{V},\mathbf{M}_s) \) with sparse mask \( \mathbf{M}_s \)
  • Rotary positional encoding (RoPE): applied to queries and keys before attention
  • Feed-forward network: two-layer MLP with GELU activation
  • Pre-norm architecture: Layer normalization before each sub-layer

The multi-head attention with sparse masking is computed as:

\[ \text{head}_i \;=\; \mathrm{Attention}(\mathbf{Q}, \mathbf{K}_i, \mathbf{V}_i, \mathbf{M}_s) \tag{20} \]
\[ \text{head}_i \;=\; \mathrm{softmax}\!\left(\frac{\mathbf{Q}\mathbf{K}_i^{\top}}{\sqrt{d_k}} + \mathbf{M}_s\right)\mathbf{V}_i \tag{21} \]

where \( \mathbf{M}_s \) contains \(0\) for allowed positions and \(-\infty\) for disallowed positions (additive masking).

After processing through \( \mathrm{Encoder}_s \), extract the CLS token representations:

\[ \mathbf{H}_s \;=\; \mathbf{Z}_s[:, :, 1:N_{\text{cls}}, :] \;\in\; \mathbb{R}^{B \times n \times N_{\text{cls}} \times d} \tag{22} \]

Aggregate the representations across all scales by averaging:

\[ \mathbf{H}_{\text{agg}} \;=\; \frac{1}{|\mathcal{S}|} \sum_{s \in \mathcal{S}} \mathbf{H}_s \;\in\; \mathbb{R}^{B \times n \times N_{\text{cls}} \times d} \tag{23} \]

This simple averaging strategy ensures that each scale contributes equally, balancing fine-grained and coarse-grained information. Next, the CLS tokens are flattened and normalized to produce the final row embeddings:

\[ \mathbf{H} = \text{LayerNorm}(\text{Flatten}(\mathbf{H}_{\text{agg}})) \in \mathbb{R}^{B \times n \times (N_{\text{cls}} \cdot d)} \tag{24} \]

where Flatten concatenates the \( N_{\text{cls}} \) token embeddings.

Algorithm 1 summarizes the complete multi-scale sparse row-wise interaction process:

Algorithm 1 Multi-Scale Sparse Row-Wise Interaction (TF_row^MS)
Input:   Embeddings E ∈ ℝ^{B×n×(m+C)×d}, valid features d_valid ∈ ℝ^B, scales S = {s₁, s₂, …, s_M}, window w, random links r
Output:  Row embeddings H ∈ ℝ^{B×n×(N_cls·d)}
1:  Initialize learnable tokens CLS ∈ ℝ^{N_cls×d}, GLOBAL ∈ ℝ^{N_global×d}
2:  H_all ← ∅                                   // Store CLS outputs from all scales
3:  for each scale s ∈ S do
4:      K_s ← ⌈m / s⌉                           // Number of groups at scale s
5:      G^(s) ← PMA(E, K_s)                     // Feature grouping
6:      X_s ← [CLS, GLOBAL, G^(s)]              // Shape (B, n, L_s, d), where L_s = N_special + K_s
7:      M_s ← BuildBlockSparseMask(L_s, N_special, w, r)   // Build sparse mask
8:      Z_s ← Encoder_s(X_s, M_s)               // Transformer with RoPE and sparse attention
9:      H_s ← Z_s[:, :, :N_cls, :]              // Extract CLS tokens
10:     H_all.append(H_s)
11: end for
12: H_agg ← (1 / |S|) Σ_{s ∈ S} H_all[s]        // Aggregate across scales
13: H ← LayerNorm(Flatten(H_agg))               // Flatten and normalize
14: return H
  

3.4.6 Computational Complexity

For a given scale \(s\) with \( K_s = \lceil m / s \rceil \) grouped feature tokens, the per-layer computational complexity of the sparse attention mechanism is:

\[ O(B \cdot n \cdot L_s \cdot (w + N_{\text{global}} + r) \cdot d) \tag{25} \]

where \(B\) is the batch size, \(n\) the number of samples per dataset, \(m\) the number of features, \(d\) the embedding dimension, and \(w\), \(N_{\text{global}}\), and \(r\) denote the sliding-window size, number of global tokens, and number of random links, respectively.

For \(M\) scales and a total of \(N_{\text{blocks}}^{\text{row}}\) Transformer layers distributed evenly across scales, the overall complexity becomes:

\[ O_{\text{total}} = \sum_{s \in \mathcal{S}} O(B \cdot n \cdot K_s \cdot (w + N_{\text{global}} + r) \cdot d) \cdot \frac{N_{\text{blocks}}^{\text{row}}}{M} \tag{26} \]

Since \( \sum_{s \in \mathcal{S}} K_s \approx m \left( 1 + \frac{1}{s_2} + \ldots + \frac{1}{s_M} \right) \) and typically \( w, N_{\text{global}}, r \ll m \), this simplifies to:

\[ O_{\text{total}} \approx O(B \cdot n \cdot m \cdot (w + N_{\text{global}} + r) \cdot d \cdot N_{\text{blocks}}^{\text{row}}) \tag{27} \]

compared to the dense attention cost of \( \mathcal{O}\!\big(B \cdot n \cdot m^{2} \cdot d \cdot N^{\text{row}}_{\text{blocks}}\big) \).

For typical hyperparameters \((m \in [10,100],\ w = 8,\ N_{\text{global}} = 4,\ r = 2)\), this results in a reduction from quadratic \( \mathcal{O}(m^{2}) \) to near-linear \( \mathcal{O}(m \cdot 14) \) complexity—achieving linear scaling while preserving both local and global feature dependencies.
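
As a quick sanity check of this claim, the snippet below tallies the per-row attention cost (up to constant factors) of dense versus block-sparse attention for a few feature counts, using the illustrative hyperparameters above.

# Dense vs. block-sparse per-row attention cost (Eqs. 25-27), constants omitted.
w, n_global, r = 8, 4, 2
for m in (10, 50, 100, 500):
    dense = m * m                       # O(m^2) pairwise interactions
    sparse = m * (w + n_global + r)     # O(m * (w + g + r)), here w + g + r = 14
    print(f"m={m:4d}  dense={dense:7d}  sparse={sparse:6d}  ratio={dense / sparse:5.1f}x")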

3.5  Cross-Component Memory with Perceiver Architecture

While the column-wise embedding and row-wise interaction components of tabular transformers independently model feature- and sample-level dependencies, richer contextual understanding can emerge if information is shared across these components. However, direct cross-component communication poses a major risk to the in-context learning (ICL) paradigm: naive attention between components can leak test-set information, violating the principle that predictions for test samples must depend solely on training examples and the test input itself.

To overcome this limitation, we introduce a Perceiver-style latent memory module [19] that enables safe, leak-free communication between architectural components. This latent memory acts as a shared representation space that can be written to by training samples and read from by both training and test samples, ensuring compliance with ICL constraints while promoting global knowledge sharing.

In standard transformer-based tabular architectures such as TabICL [32], model components operate in a strictly sequential and isolated fashion:

  1. Column Embedding (TFcol): Encodes feature-wise statistics across samples to capture column-level distributions.
  2. Row Interaction (TFrow): Models dependencies across features within each sample.
  3. ICL Prediction (TFicl): Performs in-context learning to infer test labels from training examples.

This separation simplifies optimization and ensures ICL safety, but also introduces significant limitations:

  • No backward adaptation: Column embeddings cannot adjust based on row-level feature interactions.
  • Limited contextual refinement: Row-level interactions lack access to global, dataset-level statistics beyond static column embeddings.
  • Dataset isolation: Each dataset is processed independently, preventing cross-dataset generalization within a batch.

A fundamental ICL constraint is that test samples must not influence the model’s internal state in a way that affects training representations. Formally, letting

\[ \mathcal{D}_{\text{train}} = \{(x_i, y_i)\}_{i=1}^{n_{\text{train}}}, \qquad \mathcal{D}_{\text{test}} = \{x_j\}_{j=n_{\text{train}}+1}^{n} \tag{28} \]

the prediction for a test sample \(x_j\) must satisfy:

\[ \mathbb{P}(\hat{y}_j \mid \mathcal{D}_{\text{train}}, \mathcal{D}_{\text{test}}) = \mathbb{P}(\hat{y}_j \mid \mathcal{D}_{\text{train}}, x_j) \tag{29} \]

That is, the prediction depends only on the training set and the test features, never on other test representations or their labels.

Perceiver-Style Latent Memory

Inspired by the Perceiver architecture [19], we introduce a learnable latent memory \( \mathbf{L} \in \mathbb{R}^{P \times d_h} \) with \(P\) memory slots. The key idea is:

  1. Write Phase (train-only): Memory attends to training representations to extract relevant global patterns.
  2. Read Phase (all samples): Both training and test samples attend to the memory to retrieve learned context, but cannot modify it.

This asymmetry guarantees ICL safety, since only training data influence the memory’s contents. The memory serves as a compressed, permutation-invariant summary of the training context that enables consistent feature refinement across samples.

The memory module is incorporated inside the ICL transformer (TFicl), refining the row embeddings before label injection and prediction. Given row embeddings \( \mathbf{H} \in \mathbb{R}^{B \times n \times d_h} \) (where \(B\) is the batch size and \(n\) the number of samples per dataset), the Perceiver memory transformation produces refined representations:

\[ \mathbf{R} = \text{PerceiverMemory}(\mathbf{H}, n_{\text{train}}) \in \mathbb{R}^{B \times n \times d_h} \tag{30} \]

with:

  • \( d_h = N_{\text{cls}} \cdot d \) — the hidden dimension after multi-head projection,
  • \( P \) — the number of latent memory slots (a hyperparameter),
  • \( n_{\text{train}} \) — the number of labeled training examples.

The Perceiver memory consists of three key stages, each composed of multiple cross-attention layers with residual connections and feed-forward transformations.

  1. Latent Memory Initialization:

    We initialize a set of \( P \) learnable latent vectors:

    \[ \mathbf{L}_0 \in \mathbb{R}^{P \times d_h} \tag{31} \]

    drawn from a truncated normal distribution \( \mathcal{N}(0, 0.02^2) \). These latents act as a universal memory bank, shared across all datasets in the batch and reused across forward passes, providing a stable foundation for information aggregation.

  2. Cross-Attention Block:

    At the core of the memory is a cross-attention mechanism allowing one representation to attend to another. Given query set \( \mathbf{Q} \) and key–value set \( \mathbf{KV} \), we define:

    \[ \text{CrossAttn}(\mathbf{Q}, \mathbf{KV}) = \text{softmax}\!\left( \frac{(\mathbf{Q}\mathbf{W}_Q)(\mathbf{KV}\,\mathbf{W}_K)^{\top}}{\sqrt{d_k}} \right) (\mathbf{KV}\,\mathbf{W}_V) \tag{32} \]

    where \( \mathbf{W}_Q, \mathbf{W}_K, \mathbf{W}_V \in \mathbb{R}^{d_h \times d_k} \) are projection matrices and \( d_k = d_h / h \) is the per-head dimension. Each cross-attention block is followed by layer normalization, residual connections, and a feed-forward layer:

    \[ \begin{aligned} \mathbf{Q}' &= \text{LayerNorm}(\mathbf{Q}) \\ \mathbf{KV}' &= \text{LayerNorm}(\mathbf{KV}) \\ \mathbf{Z} &= \mathbf{Q} + \text{MultiHeadCrossAttn}(\mathbf{Q}', \mathbf{KV}') \\ \mathbf{Z}' &= \mathbf{Z} + \text{FFN}(\text{LayerNorm}(\mathbf{Z})) \end{aligned} \tag{33–36} \]

    This block structure ensures stable training and supports multi-head feature integration.

  3. Write Phase: Memory Encoding

    In the write phase, the memory attends to training samples only to extract and store relevant patterns. For each dataset \( b \) in the batch:

    \[ \mathbf{H}_{\text{train}}^{(b)} = \mathbf{H}^{(b)}[:, :n_{\text{train}}, :] \in \mathbb{R}^{n_{\text{train}} \times d_h} \tag{37} \]

    We initialize the dataset-specific memory as \( \mathbf{L}_0^{(b)} = \mathbf{L}_0 \). Then we apply \( N_{\text{write}} \) cross-attention blocks where memory latents query the training representations:

    \[ \mathbf{L}_{i+1}^{(b)} = \text{CrossAttnBlock}(\mathbf{Q} = \mathbf{L}_i^{(b)}, \mathbf{KV} = \mathbf{H}_{\text{train}}^{(b)}), \quad i = 0, \ldots, N_{\text{write}} - 1 \tag{38} \]

    The final encoded memory is:

    \[ \mathbf{L}^{(b)} = \mathbf{L}_{N_{\text{write}}}^{(b)} \in \mathbb{R}^{P \times d_h} \tag{39} \]

    Importantly, \( \mathbf{L}^{(b)} \) depends only on training representations, ensuring no test leakage.

  4. Read Phase: Sample Refinement

    In the read phase, all samples (training and test) attend to the memory to retrieve stored context. For dataset \( b \):

    \[ \mathbf{R}_0^{(b)} = \mathbf{H}^{(b)} \in \mathbb{R}^{n \times d_h} \tag{40} \]

    We apply \( N_{\text{read}} \) cross-attention blocks where sample queries attend to the memory:

    \[ \mathbf{R}_{i+1}^{(b)} = \text{CrossAttnBlock}(\mathbf{Q} = \mathbf{R}_i^{(b)}, \mathbf{KV} = \mathbf{L}^{(b)}), \quad i = 0, \ldots, N_{\text{read}} - 1 \tag{41} \]

    The final refined embeddings are:

    \[ \mathbf{R}^{(b)} = \mathbf{R}_{N_{\text{read}}}^{(b)} \in \mathbb{R}^{n \times d_h} \tag{42} \]

This asymmetric read–write design preserves the integrity of in-context learning:

  • Only training samples write to the memory
  • Both training and test samples read from it.
  • The memory functions as a shared, compressed abstraction of the training data that can be safely leveraged for inference.
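
To make the write/read asymmetry concrete, here is a minimal PyTorch sketch under simplifying assumptions: a single cross-attention block per phase instead of the \(N_{\text{write}}\) and \(N_{\text{read}}\) stacks, and the feed-forward sub-layers of Equations 33–36 omitted. The class name and normalization placement are illustrative, not the released implementation.

import torch
import torch.nn as nn

class PerceiverMemorySketch(nn.Module):
    """ICL-safe latent memory: training rows write, all rows read (Eqs. 37-42)."""
    def __init__(self, d_h, num_latents=32, heads=4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, d_h) * 0.02)  # L_0
        self.write = nn.MultiheadAttention(d_h, heads, batch_first=True)
        self.read = nn.MultiheadAttention(d_h, heads, batch_first=True)
        self.norm_w = nn.LayerNorm(d_h)
        self.norm_r = nn.LayerNorm(d_h)

    def forward(self, H, n_train):
        # H: (B, n, d_h) row embeddings; the first n_train rows are the labeled context.
        B = H.shape[0]
        L = self.latents.unsqueeze(0).expand(B, -1, -1)   # (B, P, d_h)
        H_train = H[:, :n_train]                          # write phase sees training rows only
        mem, _ = self.write(L, H_train, H_train)          # Eq. 38: latents query train rows
        mem = self.norm_w(L + mem)                        # L^(b): depends only on training data
        out, _ = self.read(H, mem, mem)                   # Eq. 41: every row reads the memory
        return self.norm_r(H + out)                       # refined R, shape (B, n, d_h)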

The complete ICL forward pass with Perceiver memory is described in Algorithm 2:

Algorithm 2 — ICL with Perceiver Memory

Input:   Row embeddings H ∈ ℝ^{B×n×d_h}, training labels y_train ∈ ℝ^{B×n_train}
Output:  Predictions ŷ ∈ ℝ^{B×(n−n_train)×C} for C classes
1:  // Perceiver memory (optional)
2:  if P > 0 then
3:      for each dataset b = 1 to B do
4:          H_train^(b) ← H^(b)[:, :n_train, :]        // Extract training samples
5:          L^(b)_0 ← L_0                               // Initialize memory
6:          // Write: memory attends to training samples only
7:          for i = 1 to N_write do
8:              L^(b)_i ← CrossAttnBlock(L^(b)_{i−1}, H_train^(b))
9:          end for
10:         // Read: all samples attend to the memory
11:         R^(b) ← H^(b)
12:         for i = 1 to N_read do
13:             R^(b) ← CrossAttnBlock(R^(b), L^(b)_{N_write})
14:         end for
15:     end for
16: else
17:     R ← H                                           // No memory refinement
18: end if
19: // Label injection (training samples only)
20: R[:, :n_train, :] ← R[:, :n_train, :] + OneHot(y_train)W_label
21: // ICL Transformer with split mask (prevents test-to-train leakage)
22: R ← TF_ICL(R, attn_mask = M_split(n_train))
23: // Prediction head
24: R ← LayerNorm(R)
25: logits ← FFN_decoder(R[:, n_train:, :])             // Predict test labels only
26: return logits
  

3.6  Dataset-wise In-Context Learning

After column-wise embedding, multi-scale sparse row-wise interaction, and optional cross-component memory refinement, each sample is represented by a fixed-dimensional row embedding:

\[ \mathbf{R} \in \mathbb{R}^{B \times n \times d_h} \tag{43} \]

where \( B \) is the number of datasets in the batch, \( n \) the total number of samples per dataset, and \( d_h \) the row-embedding dimension.

The final component, dataset-wise in-context learning (TF\(_{\text{icl}}\)), leverages these embeddings to predict test labels by conditioning on labeled training examples—all within a single forward pass and without any gradient-based parameter updates.

Formally, for each dataset \( b \) in the batch:

\[ \mathcal{D}^{(b)}_{\text{train}} = \{(\mathbf{R}^{(b)}_i, y^{(b)}_i)\}_{i=1}^{n_{\text{train}}} \tag{44} \]
\[ \mathcal{D}^{(b)}_{\text{test}} = \{\mathbf{R}^{(b)}_j\}_{j=n_{\text{train}}+1}^{n} \tag{45} \]

The objective is to predict test labels \( \hat{y}^{(b)}_j \) for \( j>n_{\text{train}} \) using in-context reasoning from training samples only:

\[ \hat{\mathbf{y}}_{\text{test}} = \mathrm{TF}_{\text{icl}}(\mathbf{R}, \mathbf{y}_{\text{train}}) \tag{46} \]

The ICL module consists of three main stages:

  1. Label Encoding and Injection: To ensure consistency across datasets with potentially different label spaces, training labels \( \mathbf{y}_{\text{train}} \in \mathbb{R}^{B \times n_{\text{train}}} \) are first normalized to contiguous indices:
    \[ \bar{y}_i = \mathrm{argsort}(\mathrm{unique}(\mathbf{y}_{\text{train}}))[\mathbf{y}_{\text{train}}[i]] \tag{47} \]
    mapping any label set (e.g., \(\{2,5,9\}\)) to \(\{0,1,2\}\). Normalized labels are embedded using one-hot encoding followed by a linear projection:
    \[ \mathbf{e}_y = \mathrm{OneHot}(\bar{\mathbf{y}}, C_{\max}) \cdot \mathbf{W}_y \in \mathbb{R}^{d_k} \tag{48} \]
    where \( C_{\max} \) is the maximum number of classes (e.g., \( C_{\max}=10 \)), and \( \mathbf{W}_y \in \mathbb{R}^{C_{\max}\times d_k} \) is a learned projection matrix.

    Label embeddings are injected only into training samples via additive combination:

    \[ \mathbf{R}[:, : n_{\text{train}}, :] \leftarrow \mathbf{R}[:, : n_{\text{train}}, :] + \mathbf{e}_y(\mathbf{y}_{\text{train}}) \tag{49} \]
    ensuring test samples remain unaffected and ICL constraints are preserved.
  2. Split-Masked Transformer: The augmented embeddings \(\mathbf{R}\) are processed by a split-masked Transformer, enforcing ICL-safe attention between training and test samples. The attention mask \(\mathbf{M}_{\text{split}}\) is:
    \[ \mathbf{M}_{\text{split}}[i,j] = \begin{cases} 0, & i \le n_{\text{train}} \ \text{and}\ j \le n_{\text{train}} \quad (\text{train-to-train})\\[2pt] 0, & i > n_{\text{train}} \ \text{and}\ j \le n_{\text{train}} \quad (\text{test-to-train})\\[2pt] -\infty, & i \le n_{\text{train}} \ \text{and}\ j > n_{\text{train}} \quad (\text{train-to-test: blocked})\\[2pt] 0, & i > n_{\text{train}} \ \text{and}\ j > n_{\text{train}} \quad (\text{test-to-test}) \end{cases} \tag{50} \]
    • No leakage from test to train samples.
    • Training samples attend only to other training samples (learn from labeled context).
    • Test samples attend to training samples and other test samples (contextual reasoning).

The Transformer applies \(N_{\text{icl}}\) blocks of multi-head self-attention and feed-forward layers:

\[ \mathbf{H}^{(0)} = \mathbf{R} \tag{51} \]
\[ \mathbf{H}^{(\ell+1)} = \mathrm{TransformerBlock}\!\left(\mathbf{H}^{(\ell)}, \mathbf{M}_{\text{split}}\right), \quad \text{for } \ell = 0,\ldots, N_{\text{icl}}-1 \tag{52} \]

with the final output normalized via:

\[ \mathbf{H} = \mathrm{LayerNorm}\!\left(\mathbf{H}^{(N_{\text{icl}})}\right) \tag{53} \]
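
The following sketch builds the split mask of Equation 50 and performs the label normalization and injection of Equations 47–49 for a single dataset. Helper names are ours; the mask is meant to be passed as an additive attention mask (0 = attend, \(-\infty\) = blocked) to every self-attention layer of TF_icl.

import torch
import torch.nn.functional as F

def build_split_mask(n, n_train):
    """Additive mask of Eq. 50: train rows see only train rows; test rows see
    train rows and other test rows; train-to-test attention is blocked."""
    mask = torch.zeros(n, n)
    mask[:n_train, n_train:] = float("-inf")
    return mask

def inject_labels(R, y_train, W_y):
    """Eqs. 47-49 for one dataset.
    R: (n, d_h) row embeddings, y_train: (n_train,) integer labels, W_y: (C_max, d_h)."""
    uniq = torch.unique(y_train)                     # sorted unique labels
    y_norm = torch.searchsorted(uniq, y_train)       # map e.g. {2, 5, 9} -> {0, 1, 2}
    e_y = F.one_hot(y_norm, num_classes=W_y.shape[0]).float() @ W_y
    out = R.clone()
    out[: y_train.shape[0]] = out[: y_train.shape[0]] + e_y   # add labels to train rows only
    return out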

  3. Prediction Head:

Test sample representations \( \mathbf{H}[:,\, n_{\text{train}}{:},\, :] \) are passed through a two-layer MLP decoder:

\[ \mathbf{z} \;=\; \mathrm{GELU}\!\left(\mathbf{H}[:,\, n_{\text{train}}{:},\, :] \mathbf{W}_1 + \mathbf{b}_1\right) \;\in\; \mathbb{R}^{B \times n_{\text{test}} \times 2d_h} \tag{54} \]
\[ \mathrm{logits} \;=\; \mathbf{z}\mathbf{W}_2 + \mathbf{b}_2 \;\in\; \mathbb{R}^{B \times n_{\text{test}} \times C_{\max}} \tag{55} \]

Predictions are obtained via softmax with temperature \( \tau \):

\[ \hat{\mathbf{y}}_{\text{test}} \;=\; \mathrm{softmax}\!\left(\mathrm{logits}[:, :, :K] / \tau\right) \tag{56} \]

where \( K \) is the number of classes in the current dataset (inferred from training labels), and \( \tau = 0.9 \) by default. When \( K > C_{\max} \) (e.g., \(K > 10\)), we employ a hierarchical classification strategy:

  1. Grouping: Partition \( K \) classes into \( G = \lceil K / C_{\max} \rceil \) balanced groups.
  2. First-level prediction: Predict which group a test sample belongs to.
  3. Second-level prediction: For each group, train a classifier on the subset of classes within that group.
  4. Combination: Multiply group probability with intra-group probability to obtain final prediction.

This hierarchical mechanism preserves the ICL paradigm while scaling to hundreds of classes.
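
A sketch of the two-level combination step described above. It assumes the group-level probabilities and the within-group probabilities have already been produced by the decoder, and that classes are partitioned into contiguous groups; both are illustrative assumptions about inputs, not the released implementation.

import torch

def hierarchical_probs(group_probs, within_probs, group_sizes):
    """Combine P(group | x) with P(class | group, x) into P(class | x).
    group_probs: (n_test, G); within_probs: list of G tensors, each (n_test, |group_g|)."""
    pieces = []
    for g, size in enumerate(group_sizes):
        # Multiply the group probability with the intra-group class probabilities.
        pieces.append(group_probs[:, g:g + 1] * within_probs[g][:, :size])
    return torch.cat(pieces, dim=-1)   # (n_test, K); rows sum to 1 if inputs are normalized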

During pretraining, the model is trained with cross-entropy loss on test samples:

\[ \mathcal{L} = -\frac{1}{B \cdot n_{\text{test}}} \sum_{b=1}^{B} \sum_{j = n_{\text{train}} + 1}^{n} \log p\!\left(y^{(b)}_{j}\,\middle|\, \mathbf{R}^{(b)}, \mathbf{y}^{(b)}_{\text{train}}\right) \tag{57} \]

Critically, gradients flow through the entire architecture (column embedding, row interaction, memory, ICL transformer, decoder) in an end-to-end manner, enabling the model to learn representations optimized for in-context learning.

4  Experimental Evaluation

We conduct a comprehensive evaluation of Orion-MSP. Below, we describe our experimental setup and present detailed results.

4.1  Experimental Setting

4.1.1  Benchmark Suites and Datasets.

Our experimental evaluation spans three widely recognized benchmark suites: TALENT [40] (181 automatically discovered classification datasets), OpenML-CC18 [2] (72 curated datasets), and TabZilla [29] (36 heterogeneous tasks). Together, these benchmarks enable a comprehensive assessment across diverse tabular learning scenarios. In addition, we perform domain-specific evaluations in high-impact application areas such as healthcare and finance to examine the real-world relevance of our method. All experiments strictly follow the official dataset splits provided by each benchmark to ensure reproducibility and fairness.

For consistency across model families, results are reported only on the intersection of datasets available to all evaluated models within each benchmark suite. This unified evaluation protocol ensures that observed performance differences arise from methodological advances rather than variations in dataset coverage. After filtering, our evaluation encompasses 154 of 181 datasets from TALENT, 63 of 72 from OpenML-CC18, and 27 of 36 from TabZilla. A small number of datasets were excluded due to out-of-memory (OOM) errors or CUDA-related issues, primarily affecting TabPFN-based architectures even on H200 GPUs.

Finally, we emphasize that models with better (lower) mean ranks may not always achieve the highest absolute accuracy or F1-scores on every dataset. Rankings are computed per dataset and then averaged across all datasets, providing a normalized indicator of overall consistency rather than peak task-specific performance. In contrast, absolute metrics highlight maximum achievable performance on individual tasks. Comprehensive dataset statistics are presented in Appendix A.4.

4.1.2  Models and Baselines.

We compare our model with six state-of-the-art tabular foundation models: TabPFN [16], TabICL [32], Orion-BiX, Mitra, ContextTab [35], and TabDPT [28]. In addition, we include established traditional baselines implemented via AutoGluon [7], such as XGBoost, LightGBM, CatBoost, and Random Forest, as strong reference models for comparison.

4.1.3  Hardware Configuration.

Experiments are executed on NVIDIA L40S GPUs, with H200 GPUs used for memory-intensive cases. This infrastructure ensures consistent execution across all experiments while handling the computational demands of large transformer-based models.

4.1.4  Evaluation Metrics.

Our evaluation considers two complementary aspects:

Performance. We measure predictive capability using standard classification metrics—Accuracy (ACC), AUC-ROC, and weighted F1-score (F1)—computed across the benchmark suites TALENT, OpenML-CC18, and TabZilla. These benchmarks encompass datasets with diverse characteristics, including varying sample sizes, feature dimensionalities, and class balance, allowing a comprehensive assessment of model generalization.

Scalability. We further analyze model robustness as dataset complexity increases by examining performance trends with respect to sample size, feature dimensionality, and class imbalance. This analysis uses the same benchmark datasets, aggregated along these axes to reveal systematic scalability behaviors and guide practical model selection.

4.2  Results

Table 1 summarizes results across the TALENT, OpenML-CC18, and TabZilla benchmark suites, reporting mean rank, classification accuracy (ACC), and weighted F1-score (F1) for all evaluated models.

Our experiments confirm that classical machine learning methods remain strong baselines, achieving mean accuracies between 0.833 and 0.861 with aggregated ranks around 6.0. In contrast, pretrained tabular foundation models (TFMs) demonstrate superior generalization, even without task-specific fine-tuning. Notably, our model, Orion-MSP, achieves the best overall zero-shot rank of 3.58, with ACC/F1 scores of 0.8461/0.8360 on TALENT, 0.8722/0.8676 on OpenML-CC18, and 0.8821/0.8786 on TabZilla.

TabPFN follows closely, attaining an overall rank of 4.61 and scores of 0.8514/0.8412 on TALENT and up to 0.8752/0.8716 on TabZilla. TabDPT ranks 5.42, achieving 0.8408/0.8318 on TALENT and 0.8814/0.8775 on TabZilla. By contrast, Mitra (rank 11.77, ACC < 0.40) and ContextTab (rank 9.70) perform substantially worse, highlighting the advantages of hierarchical multi-scale processing and efficient attention in Orion-MSP.

Overall, TabPFN and Orion-MSP emerge as the strongest models, with ACC ranging from 0.85 to 0.88 and ranks between 3.26 and 4.61. Orion-MSP peaks on OpenML-CC18 (rank 4.12, ACC 0.8722) and TabZilla (rank 3.84, ACC 0.8821), while TabPFN leads on TALENT (ACC 0.8514) and maintains stable performance across all benchmark suites.

To further investigate the sources of Orion-MSP’s performance gains, we analyze results across key dataset characteristics. All analyses partition datasets based on inherent properties rather than performance outcomes.

Dataset Size. Table 2 reports model performance aggregated by dataset size: Small (<1K samples), Medium (1K-10K), and Large (>10K). Performance trends reveal that Orion-MSP consistently performs well across small, medium, and large datasets. Classical ML models such as XGBoost excel on large datasets due to abundant training examples, achieving the highest ACC/F1 in the >10K sample category. Orion-MSP, however, maintains competitive performance across all size categories, outperforming most baselines on small and medium datasets. This demonstrates the ability of multi-scale sparse attention to generalize effectively in low-data regimes while scaling gracefully to larger datasets. TabPFN also performs strongly, particularly on medium-sized datasets, but Orion-MSP’s consistent performance across size scales highlights the robustness of its hierarchical and sparse design.

Feature Dimensionality. Table 3 presents performance trends across narrow (<10 features), medium (10-100 features), and wide (>100 features) datasets. When evaluating dataset width, Orion-MSP shows the highest accuracy on narrow datasets and strong performance on medium and wide datasets. This suggests that sparse multi-scale attention enables effective learning even in high-dimensional feature spaces, where dense models such as TabICL scale less gracefully.

Class Imbalance. Partitioning datasets based on class balance reveals that Orion-MSP achieves its strongest gains on imbalanced datasets. The model ranks second in this category, achieving ACC = 0.8840 and F1 = 0.8731. This highlights that multi-scale sparse attention amplifies signals from underrepresented classes while avoiding overfitting to dominant classes. On balanced datasets, performance gains are smaller, suggesting that the architectural complexity of Orion-MSP is most advantageous when datasets exhibit skewed distributions. In comparison, TabPFN maintains strong performance on both balanced and imbalanced datasets, but Orion-MSP’s design more effectively addresses minority-class patterns due to hierarchical attention and cross-scale reasoning.

Domain-specific Analysis. Domain-wise evaluation provides deeper insight into Orion-MSP’s strengths (Table 5):

  • Medical datasets: Orion-MSP achieves the highest ACC = 0.8045 and F1 = 0.7916, ranking second overall behind Orion-BiX. These datasets often involve hierarchical biological structures and complex interdependencies among features, which align naturally with Orion-MSP’s multi-scale representation. Fine-grained scales capture local dependencies, while coarser scales aggregate contextual information, leading to improved predictive accuracy.
  • Finance datasets: Orion-MSP ranks first in mean rank (4.60), achieving ACC = 0.8158 and F1 = 0.8047. Financial datasets frequently involve layered dependencies between assets, instruments, and market indicators. Orion-MSP’s cross-component memory allows information to propagate across scales, capturing global dependencies that standard dense transformers or classical ML models fail to exploit.

Overall, domain-specific results highlight that Orion-MSP excels in high-dimensional, context-rich datasets, where hierarchical patterns and feature correlations are prevalent.

Deep Analysis and Interpretation

A detailed examination by dataset characteristics demonstrates why Orion-MSP’s design is most effective under certain conditions:

  • Class imbalance: Multi-scale sparse attention amplifies underrepresented patterns without overfitting to majority classes. Minority-class recognition improves substantially on datasets where the minority class constitutes less than 30% of the data. Balanced datasets show smaller gains, indicating that the hierarchical complexity is most beneficial in skewed settings.
  • Hierarchical structure and cross-component memory: In domains such as healthcare and finance, datasets involve natural hierarchies and complex inter-feature relationships. Orion-MSP’s multi-scale design allows it to reason at both fine-grained and coarse-grained levels. Sparse attention reduces computational cost and provides implicit regularization, mitigating overfitting in high-dimensional or correlated-feature settings. Cross-component memory further enables information exchange across scales without violating ICL safety, enhancing performance on context-dependent tasks.
  • Computational efficiency: Linear attention complexity with respect to feature number and attention window size allows Orion-MSP to scale to high-dimensional tables. Memory usage grows proportionally with input dimensions, making the model practical for large real-world datasets, unlike dense attention alternatives with quadratic scaling.

In short, fine-grained scales capture subtle minority-class patterns, while coarser scales aggregate global context, yielding a balanced representation of local and global dependencies. Sparse attention improves efficiency and regularization, reducing overfitting in high-dimensional or correlated-feature settings. The Perceiver memory enhances the model’s capacity to store and retrieve non-local patterns, enabling cross-scale reasoning—particularly valuable in context-dependent domains. However, the added architectural complexity offers limited benefit for simpler, low-dimensional datasets, suggesting future directions in adaptive designs with data-driven scale selection and dynamic sparsity control.

5  Conclusion

In this work, we introduced Orion-MSP, a novel tabular in-context learning model that leverages multi-scale sparse attention and cross-component memory to capture both fine-grained and coarse-grained dependencies in tabular data. Through extensive experiments across diverse benchmark suites—including TALENT, OpenML-CC18, and TabZilla—as well as domain-specific datasets in healthcare and finance, we demonstrated that Orion-MSP consistently achieves state-of-the-art zero-shot performance, particularly on imbalanced, high-dimensional, and context-rich datasets.

Our detailed analyses highlight that the hierarchical design, sparse attention, and cross-component memory collectively contribute to robust generalization, efficient computation, and improved representation of complex interdependencies. These architectural choices enable Orion-MSP to outperform existing tabular foundation models in challenging real-world scenarios while maintaining practical scalability.

Nonetheless, we observe that the benefits of multi-scale sparse attention are less pronounced on simple, low-dimensional datasets, where the additional architectural complexity may not be fully leveraged. This limitation motivates future work on adaptive scale selection and data-aware sparsity scheduling, allowing model complexity to adjust dynamically to dataset characteristics. Such extensions could further enhance both efficiency and generality, enabling Orion-MSP to provide strong performance across the full spectrum of tabular learning tasks.

In summary, Orion-MSP represents a promising step toward scalable, adaptive, and context-aware tabular in-context learning, with significant potential for real-world applications and future improvements in dynamic model adaptation.
